micro-gpt across the abstraction stack

A 4,192-parameter transformer (Karpathy's microGPT) implemented from scratch and benchmarked on seven substrates: pure Python, NumPy, MLX (CPU and GPU), an FPGA, hand-written C with NEON intrinsics, and WebAssembly running in your browser.

▶ Try it now (live WASM demo)

Generates plausible names character-by-character, in your browser, at ~1.34M tok/sec on an M4 Pro (24 GB). Includes a one-click benchmark.
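Character-by-character generation boils down to a simple loop: feed the context to the model, sample the next character from the predicted distribution, and stop at an end token. A minimal sketch of that loop, with a hypothetical placeholder `next_char_probs` standing in for the transformer forward pass (the real model is not shown here):

```python
import random

VOCAB = list("abcdefghijklmnopqrstuvwxyz") + ["<end>"]

def next_char_probs(context):
    # Placeholder for the model: uniform over the vocabulary, biased
    # toward <end> after a few characters so generation terminates.
    # microGPT replaces this with a transformer forward pass.
    probs = {c: 1.0 for c in VOCAB}
    if len(context) >= 4:
        probs["<end>"] = 50.0
    total = sum(probs.values())
    return {c: p / total for c, p in probs.items()}

def generate_name(rng, max_len=12):
    # Sample one character at a time until <end> or the length cap.
    out = []
    while len(out) < max_len:
        probs = next_char_probs(out)
        chars, weights = zip(*probs.items())
        ch = rng.choices(chars, weights=weights, k=1)[0]
        if ch == "<end>":
            break
        out.append(ch)
    return "".join(out)

print(generate_name(random.Random(0)))
```

The same loop shape is what every substrate in the benchmark implements; only the forward pass inside it changes.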

📊 Full report (interactive charts)

Six Plotly charts covering throughput, multi-stream scaling, the educational ladder, the quantization study, the NumPy time breakdown, and a vs-FPGA overview.

Headline findings