
Replace Your Vector Pipeline with bash
Most knowledge agents follow the same playbook: pick a vector database, build a chunking pipeline, choose an embedding model, tune retrieval parameters. Weeks later, your agent confidently returns the wrong answer and you can't figure out why.

We took a different path. Instead of embeddings, we gave the agent a filesystem and bash. It searches your content using grep, find, and cat inside isolated sandboxes. No vector DB, no chunking, no embedding model.

The results were striking. Applying this pattern to a sales call summarization agent cut costs from ~$1.00 to ~$0.25 per call, and output quality actually improved.

The core insight is simple: LLMs have been trained on enormous amounts of code. They already know how to navigate directories and grep through files. You're not teaching the model a new skill; you're leveraging the one it's best at.

Debugging gets a lot easier too. With vectors, a bad answer means trying to understand why one chunk scored 0.82 while the correct one scored lower. With grep, you can see exactly which files matched and why.
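To make the pattern concrete, here is a minimal sketch of what "retrieval" looks like when it's just shell commands. The corpus layout and file names are hypothetical; the point is that each step is an ordinary command the model has seen millions of times in training data.

```shell
# Hypothetical corpus layout: one markdown file per sales call.
mkdir -p corpus/calls
cat > corpus/calls/2024-01-acme.md <<'EOF'
Prospect: Acme Corp
Objection: pricing too high for mid-market tier
EOF

# Step 1: discover what exists -- no index to build or refresh.
find corpus -name '*.md'

# Step 2: search by keyword instead of embedding similarity.
# -r recurse, -i case-insensitive, -l print matching file names only.
grep -ril 'pricing' corpus/calls

# Step 3: read the matching file whole -- no chunking, no reranking.
cat corpus/calls/2024-01-acme.md
```

Every step is inspectable: if the agent pulls the wrong file, the grep pattern it ran tells you immediately why.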




