Perceptron — Deep Dive

Interactive explorer: geometry, anatomy, learning rule, and XOR limitation

The perceptron draws a line in input space. Points on one side → class 1, other side → class 0. Drag the sliders to rotate and shift the boundary.

z = 1.0·x₁ + 1.0·x₂ + 0.0

The weight vector w = (w₁, w₂) is drawn in orange — it is always perpendicular to the decision boundary. This is the key geometric fact.
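The geometry above can be sketched in a few lines of plain Python. This is a minimal illustration, not the explorer's own code: classification is the sign of z = w·x + b, and any direction along the boundary, such as d = (−w₂, w₁), has zero dot product with w, which is exactly the perpendicularity fact.

```python
# Minimal sketch of the perceptron's geometry (not the explorer's source).

def predict(w, b, x):
    """Classify a point: 1 if it lies on the positive side of the boundary."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 if z >= 0 else 0

w, b = (1.0, 1.0), 0.0                 # the default slider values above
print(predict(w, b, (2.0, 1.0)))      # → 1 (positive side)
print(predict(w, b, (-2.0, -1.0)))    # → 0 (negative side)

# Perpendicularity: d = (-w2, w1) points along the boundary,
# and its dot product with w is always zero.
d = (-w[1], w[0])
print(w[0] * d[0] + w[1] * d[1])      # → 0.0
```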

Click any part of the diagram to learn more.


Watch the perceptron learn. It makes a prediction, checks if it's wrong, then nudges its boundary toward the mistake.

The dashboard tracks the epoch, the last error, accuracy, and the number of misclassified points.
η = 0.30
w ← w + η·(y − ŷ)·x
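The training loop above can be sketched as follows. This is an assumption-laden sketch, not the explorer's source: it assumes the bias is updated by the same rule with a constant input of 1 (a common convention the rule above doesn't show), and it trains on OR, which is linearly separable, so the loop converges.

```python
# Sketch of the update rule w ← w + η·(y − ŷ)·x, with the bias treated as
# a weight on a constant input of 1 (assumed convention).

def train(points, labels, eta=0.3, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in zip(points, labels):
            y_hat = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            if y_hat != y:
                errors += 1
                # Nudge the boundary toward the mistake.
                w[0] += eta * (y - y_hat) * x[0]
                w[1] += eta * (y - y_hat) * x[1]
                b    += eta * (y - y_hat)
        if errors == 0:          # converged: every point classified correctly
            break
    return w, b

# OR is linearly separable, so the perceptron converges on it.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 1]
w, b = train(X, y)
```

Note the rule only fires on mistakes: when y = ŷ, the factor (y − ŷ) is zero and the weights are left alone.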

XOR cannot be solved by any single straight line: its positive examples (0,1) and (1,0) sit on opposite corners of the unit square, with the negatives (0,0) and (1,1) on the other diagonal, so no one line puts both positives on the same side. This is the perceptron's fundamental limitation.

The fix: stack perceptrons into a hidden layer. Each hidden unit draws its own line, and the output unit combines the regions they carve out, so the network can represent non-linear boundaries. That is the leap from a single perceptron to a neural network.
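The fix can be demonstrated with hand-set weights (chosen here for illustration, not learned): one hidden unit computes OR, the other computes NAND, and the output unit ANDs them, which is exactly XOR.

```python
# Two perceptrons in a hidden layer solve XOR. Weights are hand-picked
# for this sketch, not learned.

def step(z):
    return 1 if z >= 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)     # hidden unit 1: OR
    h2 = step(-x1 - x2 + 1.5)    # hidden unit 2: NAND
    return step(h1 + h2 - 1.5)   # output unit: AND of the hidden units

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "→", xor_net(x1, x2))   # → 0, 1, 1, 0
```

Each hidden unit is still just a line; it is the composition that bends the overall boundary.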