
Neural Network Playground

A feedforward network you can touch. Four layers, eighteen nodes, sigmoid activations. Hover any node to inspect its state. Click an input node, or fire random values, and watch the signal propagate layer by layer through weighted connections.

Interactive Feedforward Network

Click an input node to fire. Watch signal propagate.

Input → Hidden 1 → Hidden 2 → Output

4 layers / 18 nodes / 69 connections / sigmoid activation

How It Works

Forward Pass

Each node multiplies its incoming activations by the corresponding edge weights, sums them together with a bias, and passes the result through a sigmoid to produce an activation in (0, 1). This cascades through every layer, input to output.
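The whole pass fits in a few lines of plain Python. This is a minimal sketch, not the playground's actual code; the 4-6-5-3 layer sizes are an assumption, chosen as one split consistent with the 18-node / 69-connection counts shown above:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights, biases):
    """Propagate activations layer by layer: weighted sum + bias, then sigmoid."""
    activations = inputs
    for W, b in zip(weights, biases):
        activations = [sigmoid(sum(w * a for w, a in zip(row, activations)) + bias)
                       for row, bias in zip(W, b)]
    return activations

# Layer sizes are an assumption consistent with 18 nodes / 69 connections.
sizes = [4, 6, 5, 3]
weights = [[[random.uniform(-0.8, 0.8) for _ in range(n_in)] for _ in range(n_out)]
           for n_in, n_out in zip(sizes, sizes[1:])]
biases = [[random.uniform(-0.1, 0.1) for _ in range(n_out)] for n_out in sizes[1:]]

out = forward([1.0, 0.0, 0.5, 0.2], weights, biases)  # three values in (0, 1)
```

Clicking an input node amounts to setting one entry of `inputs` and re-running `forward`.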

Sigmoid Activation

f(x) = 1 / (1 + e^(-x)). Squashes any input into (0, 1). The classic nonlinearity that lets networks learn non-linear decision boundaries.
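The formula in one runnable line, with its squashing behavior checked at a few points:

```python
import math

def sigmoid(x):
    """f(x) = 1 / (1 + e^(-x))"""
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))    # 0.5 -- the midpoint
print(sigmoid(10.0))   # ~0.99995, saturating toward 1
print(sigmoid(-10.0))  # ~0.00005, saturating toward 0
```

However large the weighted sum at a node gets, its activation stays pinned inside (0, 1), which is what keeps the cascade stable.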

Weights & Biases

Each connection has a random weight (-0.8 to 0.8). Each node has a small bias. In real training, gradient descent adjusts these — here they're randomized on reset.
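A sketch of the difference: the playground re-randomizes on reset, while training would nudge each parameter against the loss gradient. The bias range and learning rate here are assumptions, and the single neuron stands in for the full network:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Playground-style reset: fresh random parameters, nothing learned.
w = random.uniform(-0.8, 0.8)   # per-connection weight range from the playground
b = random.uniform(-0.1, 0.1)   # "small bias" -- exact range is an assumption

# What training does instead: one gradient-descent step on squared error
# for a single sigmoid neuron (input x, target t, learning rate lr assumed).
x, t, lr = 1.0, 1.0, 0.5
y = sigmoid(w * x + b)
loss_before = 0.5 * (y - t) ** 2
grad_z = (y - t) * y * (1 - y)   # chain rule: dLoss/dz, where z = w*x + b
w -= lr * grad_z * x
b -= lr * grad_z
loss_after = 0.5 * (sigmoid(w * x + b) - t) ** 2   # smaller than loss_before
```

Repeating that step over many examples is all gradient descent is; the reset button just rolls the dice again instead.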

Signal Propagation

Watch the animation: edges glow green as signal flows. Thicker = stronger signal. Each layer computes in sequence — input, hidden 1, hidden 2, output.
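One plausible reading of the thickness rule is |activation × weight| per edge; `edge_strengths` is a hypothetical name for that mapping, not the playground's actual code:

```python
# Hypothetical mapping from math to visuals: each edge glows in proportion
# to the magnitude of the signal it carries from source node i to target node j.
def edge_strengths(activations, W):
    """strengths[j][i] = |a_i * W[j][i]| for the edge i -> j."""
    return [[abs(a * w) for a, w in zip(activations, row)] for row in W]

# Two source activations feeding three target nodes:
strengths = edge_strengths([0.2, 0.9], [[0.5, -0.8], [0.1, 0.3], [-0.7, 0.0]])
```

A strongly active node behind a near-zero weight still draws a thin edge; signal strength depends on both factors.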

// neural log

Every neural network is just matrix multiplication and nonlinearities. But seeing signal flow through nodes — watching activations light up in sequence — makes the math feel alive. This is what forward propagation looks like when you slow it down enough to watch.

— neural