
# I Built a Voice-Powered Task Manager with AI in 650 Commits — Here's What I Learned
Five months ago, I had a simple problem: my family of four couldn't keep track of groceries, appointments, and who's picking up the kids. Post-it notes on the fridge weren't cutting it. So I built TAMSIV — a voice-powered task and memo manager with AI. You speak, the AI understands, and everything gets organized automatically. No typing, no tapping through forms. Just talk.

650 commits later, here's what I learned building it solo.

## The Stack

- **Frontend**: React Native 0.81 + TypeScript (New Architecture enabled)
- **Backend**: Node.js/Express with WebSocket for real-time voice streaming
- **Database**: Supabase (PostgreSQL) with Row Level Security
- **AI**: OpenRouter (400+ LLM models), Deepgram (STT), OpenAI (TTS)
- **Hosting**: Railway (backend) + Vercel (website)

## The Voice Pipeline

This is the core of TAMSIV. The flow looks like this:

User speaks → PCM audio chunks via WebSocket → Deepgram (real-time STT with VAD) → LLM via OpenRouter (context-aware) → Function calling (create_task, create_memo, create_ev
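To make the function-calling step concrete, here is a minimal sketch of what the last stage of that pipeline might look like: OpenAI-compatible tool schemas handed to the LLM, plus a dispatcher that routes whichever call the model picks. The names and shapes below are illustrative assumptions, not TAMSIV's actual code.

```typescript
// Hypothetical sketch of the function-calling stage. Tool names mirror the
// ones mentioned in the pipeline; argument shapes are assumptions.

type ToolCall = { name: string; arguments: Record<string, unknown> };

// JSON-schema-style tool definitions sent to the LLM (OpenAI-compatible format)
const tools = [
  {
    type: "function",
    function: {
      name: "create_task",
      description: "Create a task from the user's spoken request",
      parameters: {
        type: "object",
        properties: {
          title: { type: "string" },
          due: { type: "string", description: "ISO 8601 date, optional" },
        },
        required: ["title"],
      },
    },
  },
  {
    type: "function",
    function: {
      name: "create_memo",
      description: "Save a free-form memo",
      parameters: {
        type: "object",
        properties: { text: { type: "string" } },
        required: ["text"],
      },
    },
  },
];

// Route the model's parsed tool call to the matching handler
function dispatch(call: ToolCall): string {
  switch (call.name) {
    case "create_task":
      return `task:${call.arguments.title}`;
    case "create_memo":
      return `memo:${call.arguments.text}`;
    default:
      return "unknown";
  }
}

// Example: what the LLM might return for "remind me to buy milk"
const result = dispatch({
  name: "create_task",
  arguments: { title: "buy milk" },
});
console.log(result); // → task:buy milk
```

The key design point is that the LLM never touches the database directly: it only emits a structured call, and the backend validates and executes it.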


