
Building an agent harness with AI
In the spirit of Feynman's "What I cannot create, I do not understand," I set out to build a CLI agent harness from scratch. I was curious about how tools like Claude Code and Codex actually work under the hood. The result is ra (https://github.com/chinmaymk/ra), a config-driven agent runtime where the config is the agent.

What this looks like in practice

Before I get into how things work under the hood, here's what ra actually feels like to use:

```shell
# One-shot - streams to stdout and exits
ra "Summarize the key points of this file" --file report.pdf

# Pipe it, Unix-style
cat error.log | ra "Explain this error"

# Switch providers with a flag
ra --provider openai --model gpt-4.1 "Explain this error"
ra --provider ollama --model llama3 "Write a haiku"

ra              # Interactive REPL
ra --http       # HTTP API for your app
ra --cron       # Scheduled agent runs

# MCP server - so Cursor or Claude Desktop can use ra as a tool
ra --mcp-stdio

# Observability dashboard: see logs, traces, messages, config, etc.
ra --insp
```
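To ground what "agent harness" means here: at its core, every harness like this runs the same loop - send the conversation to a model, execute whatever tool it asks for, feed the result back, and repeat until the model produces a final answer. Below is a minimal, generic sketch of that loop. It is not ra's actual code; the message shapes, the `read_file` tool, and the fake model are all illustrative stand-ins so the loop runs without any API.

```python
# A generic agent loop: the core shape shared by CLI agent harnesses.
# This is an illustrative sketch, NOT ra's implementation.

def run_agent(model, tools, user_prompt, max_steps=5):
    """model(messages) -> dict with either 'answer' or 'tool' + 'args'."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = model(messages)
        if "answer" in reply:          # model decided it is done
            return reply["answer"]
        # Otherwise the model requested a tool: run it, append the result,
        # and loop so the model can see what the tool returned.
        result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": str(result)})
    return "step budget exhausted"

# Toy stand-ins so the loop is runnable offline (hypothetical, for illustration):
def fake_model(messages):
    if messages[-1]["role"] == "tool":
        return {"answer": "The file says: " + messages[-1]["content"]}
    return {"tool": "read_file", "args": {"path": "report.txt"}}

tools = {"read_file": lambda path: "hello"}
print(run_agent(fake_model, tools, "Summarize report.txt"))
```

The "config is the agent" idea then reduces to swapping out which `model`, `tools`, and prompts get wired into this loop, driven by a config file rather than code.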




