
# New IteraTools Endpoint: POST /crawl — BFS Web Crawler for AI Agents

We just shipped `POST /crawl` to IteraTools: a breadth-first web crawler that extracts structured content from multiple pages in a single API call. This is particularly useful for AI agents that need to digest entire documentation sites, product catalogs, or any other multi-page website.

## What it does

Starting from a seed URL, it performs a BFS traversal, visiting each page and returning:

- `title` — the page title
- `markdown` — the full page content as clean markdown (up to 20,000 chars/page)
- `links` — the outbound links found on the page

By default it stays on the same domain, so you won't accidentally crawl the entire internet.

## Quick example

```shell
curl -X POST https://api.iteratools.com/crawl \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://docs.example.com","max_pages":10}'
```

Response:

```json
{
  "ok": true,
  "data": {
    "pages": [
      {
        "url": "https://docs.example.com",
        "title": "Documentation Home",
        "markdown": "# Getting Started\n\nWelcome…"
      }
    ]
  }
}
```
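For intuition, the traversal the endpoint describes — breadth-first, same domain only, capped by `max_pages` — can be sketched in a few lines of Python. This is a toy, in-memory illustration over a hypothetical link graph, not IteraTools' actual implementation:

```python
from collections import deque
from urllib.parse import urlparse

# Hypothetical link graph standing in for fetched pages.
LINKS = {
    "https://docs.example.com": [
        "https://docs.example.com/install",
        "https://docs.example.com/api",
        "https://other.example.org/",  # different domain: skipped
    ],
    "https://docs.example.com/install": ["https://docs.example.com/api"],
    "https://docs.example.com/api": [],
}

def bfs_crawl(seed, max_pages=10):
    """Return URLs in breadth-first visit order, same domain only."""
    domain = urlparse(seed).netloc
    queue, seen, order = deque([seed]), {seed}, []
    while queue and len(order) < max_pages:
        url = queue.popleft()
        order.append(url)  # a real crawler would fetch and extract here
        for link in LINKS.get(url, []):
            if urlparse(link).netloc == domain and link not in seen:
                seen.add(link)
                queue.append(link)
    return order
```

Because the queue is FIFO, pages one hop from the seed are visited before pages two hops away, and the `max_pages` cap bounds the crawl the same way the request parameter does.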
Continue reading on Dev.to.


