
# Building "Prison Break AI": Local-first Agent Planning + LLM Fallback
Large language models are powerful planners, but calling one every physics tick is expensive and fragile. This project demonstrates a hybrid pattern: prefer cheap, deterministic algorithms (A*) for routine planning, and use one-time LLM calls as a background optimization.

## System overview

- Frontend: React + TypeScript, Vite
- Physics: Matter.js
- Pathfinding: grid A* (in `src/physics/pathfinder.ts`)
- Maze generator: recursive backtracker (in `src/physics/maze.ts`)
- Optional LLM planner: proxied local Ollama runtime via `/api/ollama` (client in `src/ai/client.ts`)

## Architecture diagram

*(diagram not reproduced here)*

## Key patterns and lessons

- Staged startup UX: spawn agents visually, compute plans, show them briefly, then start movement. This keeps the app from feeling stuck while planning runs.
- Reduce LLM calls: shift high-frequency decision-making to deterministic algorithms; call the LLM only for one-time planning or recovery.
- Robust LLM client: the app gracefully handles multiple response shapes and caches availability to reduce noise when the runtime is unavailable.
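To make the "deterministic planning" half concrete, here is a minimal grid A* sketch with a Manhattan heuristic. The names (`findPath`, `Cell`) are hypothetical; the project's real implementation lives in `src/physics/pathfinder.ts` and may differ.

```typescript
type Cell = { x: number; y: number };

// grid[y][x] === 0 means walkable, 1 means wall.
function findPath(grid: number[][], start: Cell, goal: Cell): Cell[] | null {
  const h = (c: Cell) => Math.abs(c.x - goal.x) + Math.abs(c.y - goal.y);
  const key = (c: Cell) => `${c.x},${c.y}`;
  const open: { cell: Cell; g: number; f: number }[] = [{ cell: start, g: 0, f: h(start) }];
  const cameFrom = new Map<string, Cell>();
  const gScore = new Map<string, number>([[key(start), 0]]);

  while (open.length > 0) {
    // Pop the node with the lowest f. A binary heap would be faster;
    // a sort keeps the sketch short.
    open.sort((a, b) => a.f - b.f);
    const current = open.shift()!;
    if (current.cell.x === goal.x && current.cell.y === goal.y) {
      // Reconstruct the route by walking cameFrom back to the start.
      const path: Cell[] = [current.cell];
      let k = key(current.cell);
      while (cameFrom.has(k)) {
        const prev = cameFrom.get(k)!;
        path.unshift(prev);
        k = key(prev);
      }
      return path;
    }
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const n = { x: current.cell.x + dx, y: current.cell.y + dy };
      if (n.y < 0 || n.y >= grid.length || n.x < 0 || n.x >= grid[0].length) continue;
      if (grid[n.y][n.x] !== 0) continue;
      const tentative = current.g + 1;
      if (tentative < (gScore.get(key(n)) ?? Infinity)) {
        gScore.set(key(n), tentative);
        cameFrom.set(key(n), current.cell);
        open.push({ cell: n, g: tentative, f: tentative + h(n) });
      }
    }
  }
  return null; // no route exists
}
```

Because moves cost 1 and the heuristic never overestimates, the returned path is shortest; this is exactly the kind of per-tick query that should never reach an LLM.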
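The maze side can be sketched the same way. This is a generic recursive backtracker (cells at odd coordinates, walls carved as the walk proceeds), not the exact code in `src/physics/maze.ts`:

```typescript
// Recursive-backtracker maze sketch. Expects odd width/height; 1 = wall, 0 = passage.
function generateMaze(width: number, height: number, rand: () => number = Math.random): number[][] {
  const grid: number[][] = Array.from({ length: height }, () => Array(width).fill(1));
  const carve = (x: number, y: number): void => {
    grid[y][x] = 0;
    // Visit the four cells two steps away in random order.
    // (sort with a random comparator is a biased shuffle, but fine for a sketch.)
    const dirs = [[2, 0], [-2, 0], [0, 2], [0, -2]].sort(() => rand() - 0.5);
    for (const [dx, dy] of dirs) {
      const nx = x + dx;
      const ny = y + dy;
      if (ny > 0 && ny < height - 1 && nx > 0 && nx < width - 1 && grid[ny][nx] === 1) {
        grid[y + dy / 2][x + dx / 2] = 0; // knock down the wall in between
        carve(nx, ny);
      }
    }
  };
  carve(1, 1);
  return grid;
}
```

The depth-first walk produces a spanning tree over the cells, so every cell is reachable and there is exactly one route between any two, which is convenient input for the A* planner above the physics layer.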
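The staged-startup idea can be expressed as a tiny phase sequencer. This is a hypothetical helper (the real app drives phases from React state); the point is that each phase renders before the next begins, so the UI never looks frozen while plans are computed:

```typescript
type Phase = "spawning" | "planning" | "previewing" | "moving";

// Advances through the startup phases in order, reporting each one so the
// UI can render it. `sleep` is injectable so tests can skip the real delay.
async function runStartup(
  computePlans: () => Promise<void>,
  onPhase: (p: Phase) => void,
  previewMs = 1500,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<void> {
  onPhase("spawning");   // agents appear immediately
  onPhase("planning");   // A* (or the LLM) runs while spawns are visible
  await computePlans();
  onPhase("previewing"); // show the computed routes briefly
  await sleep(previewMs);
  onPhase("moving");     // only now does physics-driven movement start
}
```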
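Finally, a sketch of the "robust LLM client" pattern. The exact shapes `src/ai/client.ts` handles are not shown in the article; the ones probed below (Ollama's `{response}` and `{message.content}`, plus an OpenAI-style `choices` array) are assumptions for illustration, as is the module-level availability flag:

```typescript
let available: boolean | null = null; // null = not probed yet

// Accepts several plausible response shapes rather than assuming one.
function extractText(body: any): string | null {
  if (typeof body?.response === "string") return body.response;
  if (typeof body?.message?.content === "string") return body.message.content;
  const choice = body?.choices?.[0]?.message?.content;
  if (typeof choice === "string") return choice;
  return null;
}

async function askPlanner(prompt: string, fetchFn: typeof fetch = fetch): Promise<string | null> {
  if (available === false) return null; // cached outage: fall back to A* silently
  try {
    const res = await fetchFn("/api/ollama", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    available = true;
    return extractText(await res.json());
  } catch {
    available = false; // remember the failure; the deterministic planner takes over
    return null;
  }
}
```

Caching `available = false` after the first failure means an offline Ollama costs one failed request instead of one per agent per replan, which is where the "reduce noise" payoff comes from.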
*Originally published on Dev.to.*


