Mini neon perceptron: XOR edition
3 Mar 26
The neon perceptron is coming along, but before we (Brendan Traw and I) commit to the full 5×5 input build with custom PCBs and nine hidden neurons and seven-segment displays, we’re building a smaller one first. A 4-input, 3-hidden, 2-output mini version—just enough to be interesting, and just small enough that we can actually wire it up and debug the thing without losing our minds.
The framing for this smaller version is XOR, which turns out to be a pretty nice fit. In 1969, Marvin Minsky and Seymour Papert published Perceptrons—a mathematical analysis of what Frank Rosenblatt’s single-layer perceptron could and couldn’t do. The headline result was that a single-layer perceptron can’t compute XOR. It can do AND, it can do OR, but it can’t learn to output “true” when its inputs disagree. The function isn’t linearly separable, and that’s that. The AI winter ensued. But it later turned out that adding a hidden layer with a non-linear activation sidesteps the limitation entirely—a small multi-layer network handles XOR just fine, and everything was hunky-dory.
# The mini-perceptron setup
The mini perceptron has a 2×2 input grid—four pixels. The task is diagonal detection: light up one diagonal (top-left and bottom-right) and the network should activate one output; light up the other diagonal (top-right and bottom-left) and it should activate the other. This is a 2D XOR problem—the network needs to distinguish between the two diagonals, which requires looking at the combination of inputs rather than any single pixel.
The two output nodes represent the two diagonals. Whichever has the higher activation gets a glowing halo—that’s the network’s answer.
Between input and output sit three hidden neurons with tanh activation. The output layer uses softmax, so the two outputs sum to 1 and you can read them as a confidence distribution.
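For the curious, the architecture above (4 inputs → 3 tanh hidden units → 2 softmax outputs) boils down to a couple of matrix multiplies. Here’s a minimal sketch of the forward pass in NumPy—the weights are random placeholders (like the web demo’s “Randomise” button), not anything trained:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # input -> hidden (12 weights)
W2 = rng.normal(size=(3, 2))  # hidden -> output (6 weights)

def forward(pixels):
    """Forward pass: tanh hidden layer, softmax output."""
    hidden = np.tanh(pixels @ W1)
    logits = hidden @ W2
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# one diagonal lit: top-left and bottom-right (2x2 grid, flattened row-major)
probs = forward(np.array([1.0, 0.0, 0.0, 1.0]))
```

`probs` is a pair of confidences summing to 1—whichever is larger is the network’s answer, i.e. which output node gets the glowing halo.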
Click on the 2×2 input grid (the blue squares on the left) to light up pixels and watch the activations flow through the network. The coloured wires show what’s happening: orange for positive activations, blue for negative, with brightness and thickness indicating magnitude. Use “Reset” to clear the inputs and “Randomise” to generate a new set of weights (as per the previous post these web versions don’t actually train the model, they’re just to get a sense of how the apparatus’ll look). The wire gamma slider adjusts how visible weak connections are. You can orbit the camera by dragging on the background, and scroll to zoom.
The network has 12 weights from input to hidden (4×3) and 6 from hidden to output (3×2). That’s 18 weights total.
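Sanity-checking the tally (no bias terms, just the weight matrices for both the mini and the full build):

```python
# mini version: 4 inputs, 3 hidden, 2 outputs
mini = 4 * 3 + 3 * 2
assert mini == 18

# full version: 5x5 = 25 inputs, 9 hidden, 10 outputs
full = 25 * 9 + 9 * 10
assert full == 315
```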
# What’s next
This mini version is a prototype for the full neon perceptron—a chance to test the physical design (flexible LEDs as wires, potentiometers for weights, the overall form factor) before scaling up to the 5×5 / 9-hidden / 10-output version. If the mini version works and looks good, we’ll know the concept holds up. If it doesn’t, better to find out with 18 weights than 315.
Cite this post
@online{swift2026miniNeonPerceptronXorEdition,
author = {Ben Swift},
title = {Mini neon perceptron: XOR edition},
url = {https://benswift.me/blog/2026/03/03/mini-neon-perceptron-xor-edition/},
year = {2026},
month = {03},
note = {AT-URI: at://did:plc:tevykrhi4kibtsipzci76d76/site.standard.document/2026-03-03-mini-neon-perceptron-xor-edition},
}