
# Building OmniGuide AI — A Real-Time Visual Assistant with Gemini Live
## Introduction

What if AI could see what you see and guide you in real time? That idea led to the creation of OmniGuide AI, a real-time multimodal assistant powered by the Gemini Live API and deployed on Google Cloud Run. Instead of typing questions into a chatbot, users simply:

- Point their phone camera at a problem
- Ask a question using voice
- Receive live spoken guidance and visual overlays

OmniGuide acts like an expert standing beside you, helping with tasks such as repairing devices, cooking, learning, or troubleshooting. This article explains how we built OmniGuide AI using Google AI models and Google Cloud, as our entry for the #GeminiLiveAgentChallenge.

## The Idea

Most AI assistants today require typed prompts, but real-world problems happen in physical environments:

- Fixing a leaking pipe
- Understanding a device error
- Cooking a recipe
- Solving homework

OmniGuide AI bridges the gap by combining:

- Live camera input
- Voice interaction
- AI reasoning
- Real-time guidance

## Tech Stack

OmniG
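The camera-in, voice-out loop described above can be sketched with the `google-genai` Python SDK. This is a minimal illustration, not the authors' actual code: the model id, system prompt, and the three callback names (`get_camera_jpeg`, `get_mic_chunk`, `play_audio`) are assumptions.

```python
# Sketch of an OmniGuide-style client session against the Gemini Live API.
# Assumptions: the google-genai SDK, a Live-capable model id, and caller-supplied
# helpers for camera capture, microphone capture, and audio playback.

MODEL = "gemini-2.0-flash-live-001"  # assumed model id

def build_live_config() -> dict:
    """Session config asking Gemini Live to respond with spoken audio."""
    return {
        "response_modalities": ["AUDIO"],
        "system_instruction": (
            "You are OmniGuide, a hands-on visual assistant. Describe what "
            "you see and give concise, step-by-step spoken guidance."
        ),
    }

async def run_session(get_camera_jpeg, get_mic_chunk, play_audio):
    """Stream one camera frame and one mic chunk in, play guidance out."""
    from google import genai  # lazy import; needs GOOGLE_API_KEY at runtime

    client = genai.Client()
    async with client.aio.live.connect(
        model=MODEL, config=build_live_config()
    ) as session:
        # Forward what the user is looking at and what they asked.
        await session.send_realtime_input(
            media={"data": get_camera_jpeg(), "mime_type": "image/jpeg"}
        )
        await session.send_realtime_input(
            audio={"data": get_mic_chunk(), "mime_type": "audio/pcm;rate=16000"}
        )
        # Play back the model's spoken answer as audio chunks arrive.
        async for response in session.receive():
            if response.data:
                play_audio(response.data)
```

In a real app this loop runs continuously, interleaving frames and audio chunks so the model's guidance tracks what the camera currently sees.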


