The prompts were his (with a bit of light suggesting from me), responses from ChatGPT.
Write a proof of Pythagoras’s theorem.
Pythagoras’s theorem states that in a right triangle, the square of the length of the hypotenuse (the side opposite the right angle) is equal to the sum of the squares of the lengths of the other two sides. Symbolically, this can be written as:
c^2 = a^2 + b^2
where c is the length of the hypotenuse, and a and b are the lengths of the other two sides.
Proof:
Draw a right triangle ABC with sides AB (a), BC (b), and AC (c).
Draw squares with sides AB, BC, and AC on each side of the triangle, as shown below:
[Diagram of a right triangle with squares on each side]
The area of each of the squares is equal to the square of the length of its side. Hence:
a^2 = AB x AB = area of square with side AB
b^2 = BC x BC = area of square with side BC
c^2 = AC x AC = area of square with side AC
Consider the combined area of the squares on the two smaller sides (a and b). It can also be obtained by covering the entire triangle with a larger square and then subtracting the area of the square on the hypotenuse (c). Hence:
a^2 + b^2 = (AB + BC) x (AB + BC) - AC x AC
On the right-hand side of the equation, (AB + BC) is equal to the length of the hypotenuse (c), since it is the longest side in a right triangle. Hence:
a^2 + b^2 = c x c - c x c
Simplifying the right-hand side of the equation, we obtain:
a^2 + b^2 = c x c
Hence, we have proved that:
c^2 = a^2 + b^2
This is Pythagoras’s theorem.
Ok, that’s great. Now it’s time for some graph theory. Can you give me a proof of the 5-color theorem?
The five color theorem states that any planar graph (a graph that can be drawn on a plane without any of its edges crossing) can be colored with no more than five colors, such that no two adjacent vertices (vertices connected by an edge) have the same color.
Proof:
Consider a planar graph G with a minimum number of vertices, and suppose that it cannot be colored with fewer than five colors.
Since G has a minimum number of vertices, there must be a vertex v in G with the smallest degree (the number of edges incident to v). Let d be the degree of v.
Since G cannot be colored with fewer than five colors, d must be greater than or equal to 4.
Consider the set of vertices adjacent to v, and let S be the set of colors that have been used to color these vertices.
Since the degree of v is greater than or equal to 4, the number of colors in S is greater than or equal to 4.
Suppose, S contains all five colors. Then, since each color can be assigned to at most one of the vertices adjacent to v, we have a contradiction, as there are more vertices than colors.
Hence, there must be a color not in S. Call this color c.
Color vertex v with color c. Since no two adjacent vertices have the same color, it follows that G can be colored with no more than five colors.
This proves the five color theorem: any planar graph can be colored with no more than five colors, such that no two adjacent vertices have the same color.
Ok, can you prove the 4-color theorem as well?
The four color theorem states that any planar graph can be colored with no more than four colors, such that no two adjacent vertices have the same color.
Proof:
Consider a planar graph G and let H be a subgraph of G.
We proceed by induction on the number of vertices of H. If H has only one vertex, it can be colored with a single color.
Suppose that the statement is true for all subgraphs of G with at most n - 1 vertices, where n is a positive integer.
Let H be a subgraph of G with n vertices, and let v be a vertex in H with the minimum degree.
By the induction hypothesis, all subgraphs of H with fewer than n vertices can be colored with at most four colors.
Consider the set of vertices adjacent to v in H, and let S be the set of colors used to color these vertices.
If |S| < 4, then we can color vertex v with a color not in S, and H can be colored with at most four colors.
Otherwise, |S| = 4. We can color v with any color not in S, and since all subgraphs of H with fewer than n vertices can be colored with at most four colors, it follows that H can be colored with at most four colors.
Hence, the statement is true for all subgraphs of G with at most n vertices.
Since the statement is true for all positive integers n, it follows that any planar graph can be colored with no more than four colors, such that no two adjacent vertices have the same color.
This proves the four color theorem: any planar graph can be colored with no more than four colors, such that no two adjacent vertices have the same color.
Shock, really. The proof of the five colour theorem is an assignment he might set in one of his early-year undergraduate classes, and the proof given certainly looks ok. It might well be wrong (he didn’t look at it in detail) but it’s the sort of thing a harried marker would assume was written by a student who had some idea of what was going on in class. The four colour theorem is a much trickier proof—the famous 1976 proof required computer assistance, and it wasn’t formally verified until 2005. The proof for that one is almost certainly wrong, but that would only be obvious to someone with considerable expertise in the domain (graph theory). Such is the way of LLMs.
What does it all mean? I dunno. It’s gonna be a wild decade.
Here’s some very unscientific benchmarks from my Jekyll (v4.3) builds on my M1 Max MBP (64GB).
First, this is the previous version of Ruby, which I’ve been happily working with for ages.
bundle exec jekyll build
Averaged over 3 runs, the build time was 17.7 (±0.8) seconds.
After upgrading to the latest Ruby v3.2.0, the average over 3 runs was 17.2 (±0.2) seconds.
I followed these instructions to build a YJIT-enabled Ruby v3.2. Then, after re-installing all the deps:
RUBY_YJIT_ENABLE=true bundle exec jekyll build
The build was heaps faster—the average over 3 runs was 3.7 (±0.2) seconds. That’s a 4.8x speedup.
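For the record, that speedup figure is just the ratio of the two averaged build times reported above; here’s the arithmetic as a one-liner:

```shell
# Speedup = baseline build time / YJIT build time (averages from above)
awk 'BEGIN { printf "%.1fx speedup\n", 17.7 / 3.7 }'
```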
Obviously, like all benchmarks on the internet this one is wrong & stupid and you should do your own testing. Three runs probably isn’t enough to shake out any cache effects, and I don’t know (but could guess?) that the JIT might have even more pronounced cold start issues (although I didn’t really see it above). Anyway, if you build a lot of Jekyll sites, my anecdotal evidence is that you can save yourself a bunch of time, and tighten those feedback loops to stay in the flow state. Have fun!
In terms of the opportunities to incorporate, explore and critique new tools in the classroom, I’m actually kind of excited. I’ve written elsewhere about how you might be able to do this. The tl;dr is that you should look at your class activities (including assessment items) and try and actually do them with ChatGPT, just to see how it goes. OpenAI have also just released some stuff about considerations for educators.
At this stage these are just my own personal thoughts as a teaching academic; I’m not wearing my Associate Director (Education) hat here, and this is not an official (or unofficial) ANU School of Cybernetics policy. If you’re also putting together some guidelines for your own classroom, then questions/comments/suggestions are welcome—do get in touch.
Here are my current thoughts on what some good “use of ChatGPT in the classroom” guidelines might look like. They don’t have everything precisely defined, but they give you an idea of how I want to run my classes, balancing the opportunities and challenges these tools present for student learning.
Unless otherwise specified, you are allowed to use ChatGPT^{1} in this class, including in work submitted for assessment.
Wherever ChatGPT is used it must be cited according to the OpenAI citation instructions.^{2}
You are responsible for everything you submit. “It’s not my fault—the AI generated text introduced non-sequiturs/errors/plagiarised text/offensive language” will never get you off the hook; if it’s in your submission you’re responsible for it just as you would be if you’d written it without ChatGPT.
You are expected to be able to explain (to your tutor, lecturer or course convenor) any assessment submission to demonstrate your understanding of the concepts being assessed.
Any violations of the above will be considered a potential breach of academic integrity under clause 9 of section 12(2) of the ANU Academic Integrity Rule “improperly recycles work or otherwise improperly submits or publishes work that is not original” (note: I’m unsure on which clause is best to use here—could be clause 8, could be one of the others as well).
No “is-this-written-by-an-AI?” detection tools (e.g. this) will be used as part of the marking process.
One open question (not necessarily part of the student-facing guidelines, but relevant for anyone running a course) is what guidance should be given to the markers (e.g. tutors/TAs) on what to do when marking ChatGPT-generated content. Should submissions created with the help of ChatGPT be marked lower than “equivalent standard” (whatever that means) submissions that aren’t?
Anyway, these are just some draft thoughts—I’ll keep this post updated as my thinking changes.
From the ANU Academic Integrity Rule 2021 Section 12 (2), here’s the list of what constitutes a breach of the academic integrity principle.
(2) For this instrument, a student breaches the academic integrity principle if, in scholarly practice, the student:
- cheats; or
- impersonates another person; or
- engages in plagiarism; or
- colludes with another person; or
- improperly shares material with another person; or
- engages in contract cheating or improperly engages another person to prepare, or assist in preparing, work for the student; or
- submits or publishes anything that fails to correctly or appropriately acknowledge the work of another person or otherwise improperly appropriates the intellectual property or contribution of another person; or
- otherwise passes off the work of another person as the student’s own work; or
- improperly recycles work or otherwise improperly submits or publishes work that is not original; or
- takes a prohibited item into an examination or other assessment venue or otherwise breaches the University’s directions (however described) in relation to an examination or other assessment; or
- fabricates or falsifies any document, data or other information, or anything else, including, for example, by intentionally omitting data to obtain a desired result, or by falsely representing observations as genuinely held; or
- otherwise intentionally or recklessly engages in conduct:
- that impedes the progress of research; or
- that risks corrupting research records or compromising the integrity of research practices; or
- that uses research data from another person without appropriate acknowledgement; or
- that breaches a research protocol approved by a research ethics committee or a statutory licence condition applying to research; or
- otherwise engages in conduct with the intention of gaining, or assisting another person to gain, an unethical, dishonest, unfair or unjustified advantage; or
- otherwise engages in conduct, or assists another person to engage in conduct, that is unethical, dishonest or unfair; or
- engages in any other conduct declared to be academic misconduct by the orders.
My commentary on the above (and IANAL) is that none of those points really capture the specific case of “ChatGPT wrote this essay, not the student”, in particular because so many of the definitions reference “of another person”. I’m sure this language will be updated in the future in light of the widespread availability of generative AI tools.
Wherever ChatGPT is named in these guidelines it should be read as “ChatGPT and other generative AI tools”, where those tools are defined according to (ERROR: definition not found). Any guidelines which restrict themselves to specific AI tools by name are doomed to become out of date real fast. ↩
These guidelines deliberately don’t try to address the (important) issue of AI tools and the way they appropriate the skilled labour of the millions of individuals who created, edited and labelled the data on which they were trained. ↩
His thesis, Neurological Metaphor in Deep Learning: Issues and Alternatives, is available online through the ANU Library. It’s a really interesting mix of insightful scholarship and practice-based design interventions. In other words, he has interesting things to say and he built interesting things (some of which are available online).
Here’s the abstract:
]]>Representations of deep learning—discursive, historical and diagrammatic—are structured by a neurological metaphor that overstates a likeness to the brain and disguises other ways of understanding the technology. These neurological representations muddle the crucial public debate even as deep learning is applied in high-stakes applications, particularly in institutions of social and political power. This thesis draws on historical sources and contemporary literature to trace the development and contemporary expression of the neurological metaphor in deep learning discourse; particularly with respect to the field’s terminology, the telling of its history, and the drawing of its diagrams. In the manuscript and in three documented practice-based works, I propose alternative metaphors for deep learning—divination, surveillance and memory—to highlight sociotechnical concerns posed by the technology. As a highly interdisciplinary project, this thesis applies a range of methods drawn variously from digital humanities, discourse analysis, human-centred computing, visual arts and design, and deep learning itself. The traditional scholarship and practice-based aspects of the thesis are situated in contemporary debates of AI bias and interpretability, and the role of deep learning in systems of power.
Since the cafe uses Square for all payments, I was able to set up a webhook so the app would receive the “new sale” notification ASAP—this should be both lower-latency and more efficient than polling.
The app is basically a single Phoenix LiveView. Sadly the Square guides don’t have examples for Elixir, although it’s pretty easy to adapt, e.g., the Ruby example code to get the job done. If you’re looking to do something similar, I cobbled together this info from the docs (and a few blogs), and it might help you out to have it all in one place.
It’s important to validate that any incoming webhook is actually from Square, so Square sends a special x-square-hmacsha256-signature header for validation purposes. Performing this validation step requires access to the raw request body. Thankfully, the “Custom Body Reader” section in the Plug.Parsers docs shows how to do exactly that—just follow the instructions there.
The webhook controller module should look something like this (replace the notification_url and signature_key with the right values for your app—you’ll get your signature key from Square when you register the webhook):
defmodule MyAppWeb.SquareWebhookController do
  @moduledoc """
  Handle webhooks sent from Square.
  """
  use MyAppWeb, :controller

  @doc "handle the webhook request"
  def webhook(conn, params) do
    if is_from_square?(conn) do
      do_stuff(params)
    end

    send_response(conn)
  end

  # respond to the Square server (always 200 OK, otherwise they'll freak out)
  # note: `@doc` isn't allowed on private functions, hence plain comments here
  defp send_response(conn) do
    conn
    |> put_resp_content_type("text/plain")
    |> send_resp(200, "webhook received - thanks.")
  end

  # returns `true` if the webhook came from Square, `false` otherwise
  defp is_from_square?(conn) do
    notification_url = "http://example.com/square/webhook"
    signature_key = "WEBHOOK_SIGNATURE_KEY_FROM_SQUARE"

    {_, signature} = List.keyfind!(conn.req_headers, "x-square-hmacsha256-signature", 0)

    # here's where we access the raw request body we stashed in the custom
    # Plug.Parsers body reader
    raw_body = Enum.join(conn.assigns.raw_body)

    hash =
      :crypto.mac(:hmac, :sha256, signature_key, notification_url <> raw_body)
      |> Base.encode64()

    signature == hash
  end
end
Finally, add it to your router.ex—something like this, you know the drill.
scope "/square", MyAppWeb do
  pipe_through :api

  post "/webhook", SquareWebhookController, :webhook
end
After that’s all done (and you’ve deployed your app) you’re ready to set up a webhook subscription. Follow the Square docs and Square will start hitting your (deployed) app’s https://example.com/square/webhook endpoint, and your app can do its thing.
Note that these incoming webhook requests won’t hit your local development server running on localhost, so testing webhooks is a bit trickier. Since my app runs on fly it involved a little bit of IO.inspecting in production and then looking at the logs with flyctl logs.
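If you do want to poke at the endpoint locally, you can compute the same HMAC-SHA256 signature the controller checks for and send it yourself. This is just a sketch using the placeholder URL and signature key from the controller above—swap in your real values:

```shell
# Compute the x-square-hmacsha256-signature value for a test payload.
# SIGNATURE_KEY and URL are the placeholder values from the controller above.
SIGNATURE_KEY="WEBHOOK_SIGNATURE_KEY_FROM_SQUARE"
URL="http://example.com/square/webhook"
BODY='{"type":"payment.created"}'

# Square signs notification_url <> raw_body with HMAC-SHA256, base64-encoded
printf '%s' "${URL}${BODY}" \
  | openssl dgst -sha256 -hmac "$SIGNATURE_KEY" -binary \
  | base64
```

Pass the result as the x-square-hmacsha256-signature header on a curl POST to your local server, and the signature check should pass.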
Have fun! And if you live in Canberra, especially in Tuggeranong/Lanyon, maybe go buy a coffee from Little Luxton and you can see it for yourself 😊
However, it turns out that cloudstor is shutting down at the end of this year, so I need to find a new home for my backups. I’ve been told that the “replacement” is to use my institutional OneDrive account.
My feelings about closing down useful (local) infrastructure and centralising things on one of the big cloud providers notwithstanding, this should be an easy change—rclone supports OneDrive, so it’s a simple config change to move over.
However, I figured I’d take the opportunity to fix something which had long bothered me. I have a lot of stuff in git repos which don’t need to be backed up with rclone, and they just slow down the backup process. I wanted to get rclone to ignore all git repos, and although it has an include/exclude filtering system it’s not easy to get it to filter all git repos (trust me, I checked).
Now that it’s summertime I had a chance to make it work, and since that forum thread is now locked I’m posting it here to save you some time, dear reader, if you ever want to do this for yourself. Feel free to use/modify it (MIT Licence) if it’s helpful.
#!/bin/bash
EXCLUDE_FROM_FILE="/tmp/rclone-excludes.txt"
CLONE_ROOT_DIR=~/Documents
REMOTE=anu-onedrive
## find all git repo enclosing folders (including trailing slash), munge them
## into the form that rclone expects for its "exclude from" file
cd $CLONE_ROOT_DIR && find . -type d -name .git -exec dirname {} \; | sed -e 's/$/\//' -e 's/^.//' > $EXCLUDE_FROM_FILE
## add a few extra excludes
echo ".DS_Store" >> $EXCLUDE_FROM_FILE
## sync to remote
rclone sync --progress --exclude-from=$EXCLUDE_FROM_FILE $CLONE_ROOT_DIR $REMOTE:mitch-rclone/$CLONE_ROOT_DIR
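If you’re wondering what that find/sed pipeline actually emits, here’s a self-contained demo on a throwaway directory tree (no rclone required; the repo names are made up):

```shell
# Build a fake tree with one git repo, then run the same munging pipeline
tmp=$(mktemp -d)
mkdir -p "$tmp/projects/myrepo/.git" "$tmp/notes"
cd "$tmp" && find . -type d -name .git -exec dirname {} \; \
  | sed -e 's/$/\//' -e 's/^.//'
# prints "/projects/myrepo/" — the leading-slash, trailing-slash form
# that rclone's --exclude-from file expects
```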
Here’s—and this is the honest truth—the first conversation I had with it. The initial paragraph was from the ANU School of Cybernetics website, and from there we just freewheeled. My inputs are in regular type, the ChatGPT responses are in the coloured boxes.
begin chat transcript
The 3A Institute sits within the School of Cybernetics with the mission to build a new branch of engineering to take AI-enabled cyber-physical systems to scale and to create a new generation of practitioners with the skills and knowledge we need to help shape this future safely, sustainably and responsibly.
As people invent, commercialise and operationalise new technologies, new practitioners emerge to manage them. Just as the invention of programmable computers led to the professionalisation of software engineers, cyber-physical systems will lead to a new type of engineer.
We don’t know what this new type of engineering will be called yet, but we know we need it if we are to build a safe, sustainable and responsible future.
Give 5 concrete examples of how ideas from cybernetics can help:
Can you try again, but this time using the 19th Century Australian Overland Telegraph as a framing device?
Sure, here are five examples of how ideas from cybernetics can help using the 19th Century Australian Overland Telegraph as a framing device:
Ok, that’s better—I think it’s important that we provide a uniquely Australian viewpoint on what cybernetics is and how it can help tackle the challenges we face in the 21st century. As a new School of Cybernetics, then, what should our top priorities be?
As a new School of Cybernetics, some top priorities could include:
end chat transcript
Well, wow. This required no re-prompting or editing on my part.
I should say that I share all of the skepticism (and fears!) of folks who are pointing out that these language models don’t actually understand things. In some cases this means the output of these models is benign but incorrect, and it’s easy to think of situations where real harm could be caused.
Upon close reading, I don’t think the above responses capture the full breadth of what we’re trying to do here in the School of Cybernetics. But I’d be lying if I said I wasn’t impressed.
As I tweeted earlier this morning:
all this ChatGPT stuff means that we as a society are gonna have to get a lot better at drinking from the bullshit firehose, and fast
I’ll pen some more thoughts about what it all means when I’ve had some more time to digest things.
I wrote an essay on the School of Cybernetics website which goes into more detail about how it works and what it all means (I mean, it’s an artwork, so mostly I built it because I could, but some folks wanna know “what’s the practical application of this stuff” so I wrote some words to give plausible deniability).
Here’s a 90min video of every single stable-diffusion image generated over the full 2 weeks of the exhibition. Not necessarily the sort of thing you’d sit down and watch from start to finish, but fun to dip in and out of. Even at 8fps it’s still possible to watch the gestalt change over time (and it’s also interesting to see which prompts the humans in the room put in).
On the livecoding front, Ushini & I did another livecoding set at the OzCHI 2022 conference—pics & video on the livecoding page.
From the NGA website:
Experience an immersive cybernetic walking tour of the National Gallery after dark. Travel through familiar and behind-the-scenes spaces and investigate the dynamic relationships between humans, technology and the built environment found within works of art and the Gallery itself. Wear your walking shoes and join us as we explore previously hidden parts of the building and gardens.
Held over one week during the Gallery’s 40th birthday celebrations, this special event is co-presented by the National Gallery and the ANU School of Cybernetics.
I’m not sure if there are any tickets left, but if there are you should definitely check it out. You won’t get another chance to experience one of Canberra’s cultural institutions in this way for a long time.