
LiDAR Depth Maps on the GPU in React Native — And a Dawn Upstream Contribution
In Part 1 , I built a zero-copy GPU compute pipeline for camera frames. In Part 2 , I got Apple Log HDR working with LUT color grading. This post covers what happened next: getting LiDAR depth data into the same GPU pipeline, the format wars that ensued, and submitting my first upstream patch to Dawn (Google's WebGPU implementation). The Goal iPhone Pro models have LiDAR sensors that produce real-time depth maps at 320×180 at 60fps. I wanted to feed that depth data into the same WebGPU compute pipeline as the camera video — one shader that reads both the camera frame and the depth map, producing a depth-colored overlay blended with the live video. The API I wanted: const { currentFrame } = useGPUFrameProcessor ( camera , { resources : { depth : GPUResource . cameraDepth (), }, pipeline : ( frame , { depth }) => { ' worklet ' ; frame . runShader ( DEPTH_COLORMAP_WGSL , { inputs : { depth } }); }, }); Declare a dynamic depth resource. Bind it as a shader input. The native side handles AV
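The excerpt cuts off before showing the shader itself, but to give a sense of what the compute side involves, here is a minimal sketch of what a DEPTH_COLORMAP_WGSL shader could look like, written as a TypeScript constant holding WGSL. The bind-group layout, the meters-based depth units, and the 5 m normalization range are all assumptions for illustration, not the library's actual conventions.

```ts
// A sketch of a depth-colormap compute shader. The binding layout
// (camera frame at @binding(0), output at @binding(1), depth at
// @binding(2)) is assumed; the post doesn't show the real convention.
const DEPTH_COLORMAP_WGSL = /* wgsl */ `
@group(0) @binding(0) var cameraFrame: texture_2d<f32>;
@group(0) @binding(1) var outputTex: texture_storage_2d<rgba8unorm, write>;
@group(0) @binding(2) var depthMap: texture_2d<f32>;

// Map a normalized depth value to a simple blue-to-red gradient.
fn colormap(t: f32) -> vec3f {
  return vec3f(t, 0.25, 1.0 - t);
}

@compute @workgroup_size(8, 8)
fn main(@builtin(global_invocation_id) gid: vec3u) {
  let dims = textureDimensions(outputTex);
  if (gid.x >= dims.x || gid.y >= dims.y) {
    return;
  }

  let color = textureLoad(cameraFrame, gid.xy, 0);

  // The 320x180 depth map is smaller than the video frame, so scale
  // the pixel coordinate into depth-map space before reading it.
  let depthDims = textureDimensions(depthMap);
  let depthCoord = vec2u(vec2f(gid.xy) * vec2f(depthDims) / vec2f(dims));

  // Assume depth arrives in meters; normalize against a 5 m max range.
  let depth = textureLoad(depthMap, depthCoord, 0).r;
  let t = clamp(depth / 5.0, 0.0, 1.0);

  // Blend the depth colormap over the live video at 50% opacity.
  let blended = mix(color.rgb, colormap(t), 0.5);
  textureStore(outputTex, gid.xy, vec4f(blended, 1.0));
}
`;
```

The nearest-neighbor upsampling of the low-resolution depth map is crude but cheap; a smoother overlay would bind a sampler and use bilinear filtering instead of textureLoad.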