Building a WebGPU Library from Scratch


via Dev.to Webdev, by Phantasm0009

I wanted a NumPy-like API that ran on the GPU in the browser. No training graphs, no autograd—just arrays and ops. So I built one.

Why not just use TensorFlow.js?

TensorFlow.js is great, but it's heavy. I needed something small for demos and experiments, and I also wanted to understand how GPU compute actually works under the hood. So I started a small library called accel-gpu.

The basics

WebGPU exposes compute shaders via WGSL. You create buffers, write shaders, and dispatch workgroups. The tricky part is making that feel like `a.add(b)` instead of "create bind group, set pipeline, dispatch, sync."

I went with a simple model: each op has a precompiled WGSL shader. `add` is a shader that does `out[i] = a[i] + b[i]`. `relu` is `out[i] = max(0, a[i])`. No runtime shader generation, no graph IR—just a map of op names to shader strings.

```wgsl
@group(0) @binding(0) var<storage, read> a: array<f32>;
@group(0) @binding(1) var<storage, read> b: array<f32>;
@group(0) @binding(2) var<storage, read_write> out: array<f32>;

@compute @workgroup_size(256)
fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
    let i = gid.x;
    if (i < arrayLength(&out)) {
        out[i] = a[i] + b[i];
    }
}
```

That's the
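The "map of op names to shader strings" idea can be sketched in plain JavaScript. This is a minimal illustration, not accel-gpu's actual API: the names `opBodies`, `buildShader`, and `workgroupCount` are assumptions, and a real library would distinguish unary ops (which bind one input) from binary ops.

```javascript
// Per-op WGSL bodies: each op is just the expression for one output element.
// These are illustrative; the real library may store full shader strings.
const opBodies = {
  add:  "out[i] = a[i] + b[i];",
  mul:  "out[i] = a[i] * b[i];",
  relu: "out[i] = max(0.0, a[i]);",
};

// Wrap an op body in the boilerplate every elementwise shader shares.
function buildShader(op) {
  const body = opBodies[op];
  if (!body) throw new Error(`unknown op: ${op}`);
  return `
@group(0) @binding(0) var<storage, read> a: array<f32>;
@group(0) @binding(1) var<storage, read> b: array<f32>;
@group(0) @binding(2) var<storage, read_write> out: array<f32>;

@compute @workgroup_size(256)
fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
  let i = gid.x;
  if (i < arrayLength(&out)) {
    ${body}
  }
}`;
}

// With @workgroup_size(256), an n-element array needs ceil(n / 256)
// workgroups to cover every index; the in-shader bounds check handles
// the overshoot in the last workgroup.
function workgroupCount(n, workgroupSize = 256) {
  return Math.ceil(n / workgroupSize);
}
```

At dispatch time, a wrapper like `a.add(b)` would look up the precompiled pipeline for `"add"` and call something like `pass.dispatchWorkgroups(workgroupCount(n))`, so the user never touches bind groups directly.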

Continue reading on Dev.to Webdev


