Embodied AI: Why I Gave My Home Robot an "Eye in the Sky"

via Dev.to Python, by susanayi

This is part of a series on building a training-free home service robot using VLMs, RAG memory, and manifold learning. This post covers the camera architecture: specifically, why fixed ceiling-mounted nodes ended up as the foundation of the whole perception system.

Honestly, my first instinct was: why not just use the robot's onboard camera? It's the obvious answer. The robot is already in the room, and it already has a camera. Adding twelve fixed ceiling nodes sounds like unnecessary complexity: more hardware, more calibration, more failure points, for a system that was already complicated enough.

My advisor's requirement was firm: the AI pipeline must take its visual input from fixed global cameras, not from the robot itself. No negotiation on that point. So I spent a while sitting with that constraint, trying to understand it rather than just comply with it.

This post is what I figured out. It starts with the genuine question I had (why global cameras at all?) and ends with the engi…

Continue reading on Dev.to Python
