
How I Carve Objects Out of Depth Instead of Texture
A depth pipeline should behave like a carpenter reading a level, not a photographer admiring a picture. It thresholds, groups, checks for discontinuities, and validates whether the resulting surfaces can be trusted. That framing is the whole point of what I built: a segmentation path that still has something useful to say when the room is nearly dark and the RGB frame is useless.

The failure that started this pipeline was an image that looked worthless while the depth map still had structure. Once I saw that, segmentation stopped being a color problem and became a geometry problem: if the scene is dark enough, texture is the wrong witness, and the depth field is the one telling the truth.

The shape-first path
The route I care about starts in the web app, but the important work happens on the GPU server. The browser sends a depth payload to the API route, and that route forwards the request to the depth segmentation service. From there, the server turns a depth map into labeled regions.
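The threshold/group/validate loop described above can be sketched as follows. This is a minimal illustration, not the author's actual code: the function name, the `max_jump` and `min_area` parameters, and the choice of gradient magnitude as the discontinuity test are all my assumptions about one reasonable way to carve a depth map into labeled regions.

```python
import numpy as np
from scipy import ndimage

def segment_depth(depth, max_jump=0.05, min_area=200):
    """Hypothetical sketch: label connected regions of a depth map by
    carving along depth discontinuities, then validate by region size."""
    # Exclude invalid depth (zeros, NaNs) up front.
    valid = np.isfinite(depth) & (depth > 0)

    # Discontinuity check: a pixel whose depth jumps more than
    # max_jump (in depth units) relative to its neighbors sits on
    # an object boundary and should not be grouped across it.
    gy, gx = np.gradient(np.where(valid, depth, 0.0))
    smooth = np.hypot(gx, gy) < max_jump

    # Group: connected components of smooth, valid pixels.
    labels, n = ndimage.label(valid & smooth)

    # Validate: drop regions too small to be a trustworthy surface.
    for lbl in range(1, n + 1):
        if (labels == lbl).sum() < min_area:
            labels[labels == lbl] = 0
    return labels
```

The design choice that matters here is carving on the depth *gradient* rather than on absolute depth bands: a slanted wall stays one region even as its depth varies, while two objects at genuinely different depths split cleanly at the jump between them.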
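One step back in the route, the relay from the API route to the segmentation service is plain request forwarding. The sketch below is an assumption about that hop, not the author's implementation: the service URL, port, endpoint path, and JSON payload shape are all hypothetical.

```python
import json
import urllib.request

# Assumed address of the GPU-side depth segmentation service.
SEGMENT_URL = "http://gpu-server:8000/segment"

def forward_depth(depth_payload: dict, url: str = SEGMENT_URL) -> dict:
    """Relay the browser's depth payload to the segmentation service
    and return the service's JSON response (e.g. labeled regions)."""
    body = json.dumps(depth_payload).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```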
Continue reading on Dev.to


