
We Scored 2/6 on Our Own Agent-Readiness Scanner. Here's How We Fixed It.
We build Strale, an API that gives AI agents access to business-data capabilities: company lookups, compliance checks, financial data, that kind of thing. Agents call strale.do() at runtime and get structured, quality-scored results back.

A few weeks ago we asked ourselves a question we probably should have asked sooner: if an agent tried to discover and use our own API, what would it actually experience? We didn't know. So we built a tool to find out.

The tool we built for ourselves

We started with a simple internal checker: a script that hit our API the way an agent would and reported what it found. Could it find our llms.txt? Could it parse our OpenAPI spec? Did our MCP endpoint actually respond? Were our error messages machine-readable, or just HTML 404 pages?

The script grew. We added checks for structured data, robots.txt crawler policies, authentication documentation, rate limit headers, content negotiation, schema drift between our spec and our live responses, machine-readable
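The first version of that checker can be sketched roughly like this. Everything here is illustrative: the probed paths, the pass conditions, and the scoring are assumptions about what such a script might check, not Strale's actual implementation.

```python
import json
from urllib.parse import urljoin
from urllib.error import HTTPError
from urllib.request import urlopen

# Each check: (name, path to probe, pass condition on the response).
# Paths and conditions are illustrative assumptions, not Strale's real checks.
CHECKS = [
    ("llms_txt", "/llms.txt",
     lambda status, ctype, body: status == 200 and bool(body.strip())),
    ("openapi_spec", "/openapi.json",
     lambda status, ctype, body: status == 200 and "paths" in json.loads(body)),
    # An unknown path should return a machine-readable error,
    # not an HTML 404 page.
    ("machine_readable_errors", "/this-path-should-not-exist",
     lambda status, ctype, body: "json" in ctype.lower()),
]

def default_fetch(url, timeout=5):
    """Fetch a URL, returning (status, content_type, body) even on HTTP errors."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return (resp.status,
                    resp.headers.get("Content-Type", ""),
                    resp.read().decode("utf-8", "replace"))
    except HTTPError as err:
        return (err.code,
                err.headers.get("Content-Type", ""),
                err.read().decode("utf-8", "replace"))

def run_checks(base_url, fetch=default_fetch):
    """Run every check against base_url; fetch is injectable for testing."""
    results = {}
    for name, path, passes in CHECKS:
        try:
            status, ctype, body = fetch(urljoin(base_url, path))
            results[name] = bool(passes(status, ctype, body))
        except Exception:
            results[name] = False
    return results

def score(results):
    """Summarize as the kind of passed/total score in the title."""
    return f"{sum(results.values())}/{len(results)}"
```

Injecting the fetch function keeps the probes testable without a live server, and each new concern (rate-limit headers, content negotiation, schema drift) is just one more entry in the check table.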
Continue reading on Dev.to Webdev




