Cybernetic Studio Launch @ SXSW Sydney

I’m launching the Cybernetic Studio next week (Oct 15) at SXSW Sydney. If you’re attending, come to Chippendale on Wednesday afternoon (search for “Cybernetic Studio Launch” in the SXSW app for details) and see what we’ve been building.

The Cybernetic Studio is a lab-style initiative exploring responsible innovation through creative, embodied approaches to technology. Which is a fancy way of saying: we build things to think with[1], and we try to do it in ways that don’t just reproduce the surveillance capitalism playbook. Nam June Paik used technology in order to hate it properly; the Studio aims to carry on that noble tradition.

The launch project is “Human Scale AI”—an attempt to slow computations down and blow them up so we can actually see them at work. Two artefacts I’ll be showing:

  • Perceptron Apparatus: a 1.2-metre-diameter wooden table that functions as an “Abacus for AI”. Is it an abacus? Is it an ouija board? No, it’s a physical device capable of performing the Artificial Neural Network calculations that underpin all of modern AI. The apparatus can actually perform real AI tasks (e.g. image recognition, data classification) if you have a lot of time and a willingness to multiply numbers using concentric wooden rings (there’s a code sketch of the same single-neuron calculation after this list). It’s part séance, part computational theatre, part “explainable AI”, and asks where the “intelligence” lives when the computation is distributed across human operators and a wooden table.

  • My First LM: a hands-on activity where you build and use a language model with paper grids, dice, and children’s books. You manually create word co-occurrence grids (the training phase), then use dice to sample from those distributions to generate new text (the prediction phase). It’s remarkably effective at teaching the train-predict-sample cycle—people get it in a way that no amount of explaining “transformers” or “attention mechanisms” ever achieves. The activity scales up through different ideas in Large Language Models—bigram/trigram models, embedding vectors, alternative sampling procedures, even LoRAs—to show how the same model can behave differently depending on how you use it (a code sketch of the whole loop follows this list).[2]
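
If you want the apparatus’s arithmetic in a more familiar medium, here’s a minimal Python sketch of the single-neuron (perceptron) calculation it performs. The inputs, weights, and bias below are made-up numbers for illustration, not values from the table itself:

```python
# A single perceptron: a weighted sum of inputs, then a hard threshold.
# On the apparatus, each multiplication here is a physical operation
# with the concentric wooden rings.
def perceptron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Toy example: 0.5*1.0 + 0.8*(-0.6) + (-0.1) = -0.08, so the output is 0.
print(perceptron(inputs=[0.5, 0.8], weights=[1.0, -0.6], bias=-0.1))
```

Stack enough of these together (and soften the threshold) and you get the networks behind modern image recognition; the apparatus just makes every multiply-and-sum step visible and slow.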
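
And the entire train-predict-sample loop from My First LM fits in a few lines. A minimal bigram sketch, again in Python; the toy corpus and seed word here are my own stand-ins, not the workshop materials:

```python
import random
from collections import defaultdict

# "Training": tally which word follows which (the same grid you fill in by hand).
corpus = "the cat sat on the mat and the cat ran".split()
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# "Prediction": choose the next word in proportion to its count,
# which is exactly what rolling dice against the grid does.
def next_word(word):
    row = counts[word]
    if not row:  # dead end: this word only appears at the end of the corpus
        return random.choice(corpus)
    return random.choices(list(row), weights=list(row.values()))[0]

# "Sampling": generate new text one word at a time from a seed.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and"
```

Everything the activity scales up to (trigrams, embeddings, alternative sampling procedures, LoRAs) is a variation on one of those three steps.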

Both pieces are about materiality and scale. Can you understand something differently when you can touch it, when it operates at human perception speed, when you can see all the moving parts at once? I suspect yes, but that’s what the Studio is for—figuring out what actually works.

The event is hosted with Dr. Cath Ball, who always brings the juice. Should be good conversations.

It’s ticketed, but if you’re an SXSW badge-holder you can register via the app. Come along, have a few drinks on us, see some interesting human-scale AI artefacts, and have some interesting conversations about where this is all heading.

  1. This is the core of research-through-practice—you can’t think your way to understanding complex sociotechnical systems; you have to build them and watch how they fail.

  2. There’s a particular kind of technical literacy I’m after here. Not “everyone should learn to code”, but rather “everyone should have access to mental models that aren’t just anthropomorphic vibes about what AI wants or thinks.” 
