No mockups. The actual model.
Most "AI visualizations" you see online are decoration. Animated dots that pulse to a fake rhythm. Particles that don't connect to anything. A nice-looking metaphor with no model behind the curtain.
Neuropulse is the opposite. Every brightness, every line, every motion you see is a direct readout of a real WebGPU buffer in a real Phi-3-mini forward pass. When the model thinks about your prompt, you watch it think — not a representation of it.
Strict 1:1. Every pixel a function of a real tensor.
It runs entirely on your machine. The 3.8B-parameter model is loaded into your GPU's memory, the attention math is done in WGSL compute shaders, and the next-token logits are sampled in your tab. There is no server. There is no API key. Close the tab and the inference stops.
Every part of the model, labeled.
The 3D scene is not a metaphor. Each glowing element corresponds to a specific tensor in Phi-3-mini's compute graph. The 3,072 points of the residual stream are laid out by a PCA of the model's own layer-0 `qkv_proj` weights — so dims that get read into attention together end up near each other — and on every step the brightness of each point is the live value of that residual dimension. If you hover an attention head, the brightness you see is that head's output magnitude.
- **32 layer rings.** Each ring is one transformer block. Brightness tracks the post-attention plus post-FFN residual norm for that layer — watch the signal build as the prompt flows upward.
- **32 attention heads per layer.** Cyan neurons on the outer ring. Each lights up in proportion to its head's output magnitude. 1,024 heads in total, all live.
- **FFN slab.** The violet 8,192-neuron expansion, by far the largest compute budget in the model. You can see it pulse as the MLP activates.
- **Residual stream (3,072 dims).** The highway through the network. 3,072 points, one per dim, placed by PCA of the layer-0 `qkv_proj` weights so functionally related dims sit near each other. Brightness on each point is the live residual value at that dim.
- **KV cache strips.** The growing memory of past tokens. Each strip is one position; height equals cache fill for that layer.
- **LM head.** Final projection to 32,064 vocab logits. Softmax → next token. The live top-k distribution prints to the side panel as the model decodes.
Cross-checked against reference Phi-3.
"Strict 1:1" is a strong claim, so it has to be falsifiable. Neuropulse ships with a built-in test suite that diffs the WebGPU implementation against a reference HuggingFace fp16 Phi-3-mini on a fixed set of prompts cached as `reference.json`. Click the wrench icon inside the demo to run it — the actual numbers from your GPU print to your browser console.
What you should expect: tiny deltas at the hidden-state level (the cost of int4 quantization, not implementation drift) and identical top-1 tokens against the fp16 reference on the test set. That last bit is the bar that matters for a faithful rendering — and it's the one you can re-run yourself, on your own machine, in under a minute.
How it's built.
Four pieces. No frameworks for the inference path, no dependency soup, no clever tricks hiding the model from you.
- **WebGPU compute & WGSL.** 13 pipelines, 22 buffers, 292 dispatches per token. Quantization: `q4f16_1`. Hand-written attention and FFN kernels.
- **MLC Phi-3-mini weights.** The same weights as `mlc-ai/Phi-3-mini-4k-instruct-q4f16_1-MLC`, fetched directly from HuggingFace and cached in the browser's Cache API.
- **Three.js scene.** Plain `WebGLRenderer`. No bloom, no particles, no decorative shaders. Every pixel pulls from a real tensor on every frame.
- **PCA layout from the model's own weights.** Residual points are placed by PCA of layer 0's `qkv_proj.weight` columns; FFN points by PCA of `down_proj.weight`. Dims that get read or written together end up near each other, so the geometry is shaped by the model itself, not by hand.
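The weight-derived layout boils down to PCA over weight-matrix columns. Here is a minimal power-iteration sketch of that idea — top component only, where a real layout would deflate and repeat for two or three axes. Pure illustration, not the Neuropulse implementation:

```javascript
// Sketch: place points by PCA of weight-matrix columns, treating each column
// as one sample. Power iteration finds the top principal direction.
function topComponent(cols, iters = 200) {
  const d = cols[0].length;
  // Center the samples.
  const mean = new Array(d).fill(0);
  for (const c of cols) c.forEach((x, i) => (mean[i] += x / cols.length));
  const centered = cols.map((c) => c.map((x, i) => x - mean[i]));
  // Power iteration on the covariance, applied implicitly as X^T (X v).
  let v = new Array(d).fill(1 / Math.sqrt(d));
  for (let it = 0; it < iters; it++) {
    const next = new Array(d).fill(0);
    for (const row of centered) {
      const dot = row.reduce((s, x, i) => s + x * v[i], 0);
      row.forEach((x, i) => (next[i] += dot * x));
    }
    const norm = Math.hypot(...next);
    v = next.map((x) => x / norm);
  }
  return { mean, v };
}

// Project each column onto the component: one layout coordinate per dim,
// so columns that co-vary (get read/written together) land near each other.
function projectColumns(cols) {
  const { mean, v } = topComponent(cols);
  return cols.map((c) => c.reduce((s, x, i) => s + (x - mean[i]) * v[i], 0));
}
```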
See it for yourself.
Open Neuropulse, feed it a prompt, and watch a model think. The first load downloads about 2 GB of weights into your browser cache; subsequent visits start instantly.
Launch Neuropulse →