
Why I Built Selectools (and What I Learned Along the Way)
Every AI agent framework makes the same promise: "connect your LLM to tools and go." Then you start building. You discover that LangChain needs 5 packages to do what should take 1. That LCEL's | operator hides a Runnable protocol that breaks your debugger. That LangSmith costs money to see what your own code is doing. That when your agent graph pauses for human input, LangGraph restarts the entire node from scratch.

I hit every one of these at work. We were building AI agents for real users: not demos, not prototypes, but production systems handling actual customer requests. The existing frameworks weren't built for this. So I built selectools.

What I actually needed

Tool calling that just works. Define a function, the LLM calls it. No adapter layers, no schema gymnastics. Works the same across OpenAI, Anthropic, Gemini, and Ollama.

Traces without a SaaS. Every run() should tell me exactly what happened: which tools were called, why, how long each step took, what it cost. Not "sign up
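To make the "define a function, the LLM calls it" idea concrete, here is a minimal sketch of what that shape can look like under the hood: derive a provider-style tool schema from a plain Python function's signature, then dispatch the model's JSON tool-call arguments back to it. The names here (tool_schema, dispatch, get_weather) are illustrative only, not selectools' actual API.

```python
import inspect
import json

# Map basic Python annotations to JSON Schema type names.
_TYPES = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool_schema(fn):
    """Build an OpenAI-style tool schema from a function's signature."""
    props = {}
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": _TYPES.get(param.annotation, "string")}
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": props,
            "required": list(props),
        },
    }

def dispatch(fn, tool_call_arguments):
    """Run the function with the JSON arguments the model returned."""
    return fn(**json.loads(tool_call_arguments))

def get_weather(city: str, unit: str) -> str:
    """Return a fake weather report for a city."""
    return f"{city}: 21 degrees {unit}"

schema = tool_schema(get_weather)
result = dispatch(get_weather, '{"city": "Oslo", "unit": "C"}')
```

The point of this shape is that the function itself is the single source of truth: its name, docstring, and annotations produce the schema the model sees, so there is no separate adapter layer to keep in sync.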


