
# I built my own website crawler because SEO tools were too restrictive
## The problem

I run nerdyelectronics.com, a tech blog. Every time I published a batch of posts or changed my site structure, I'd go through the same painful cycle:

1. Make changes
2. Wait for Google Search Console to recrawl (days)
3. Discover broken links and issues... days later
4. Fix them
5. Wait again

I tried the usual tools. Screaming Frog is powerful, but it feels like enterprise software for a simple job. SaaS crawlers charge per page or per scan. I just wanted to know: what's broken on my site *right now*?

So I built CrawlyCat.

## What it does

CrawlyCat crawls your website and reports:

- HTTP 4xx/5xx errors
- Redirect chains
- Missing or bad `<title>` and meta descriptions
- Missing or duplicate `<h1>` tags
- Internal broken links
- External link inventory

Nothing revolutionary — but it runs **locally**, has **no limits**, and takes about 30 seconds to set up.

## The architecture

Two crawling modes:

**Browser mode (default):** Uses Playwright with headless Chromium. This handles JavaScript-rendered pages and even bypasses Cloudflare
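To give a feel for the on-page checks, here is a minimal sketch of a title/meta/`<h1>` audit using only Python's standard library. This is illustrative, not CrawlyCat's actual code — the class and function names are mine.

```python
from html.parser import HTMLParser

class SEOAudit(HTMLParser):
    """Collects the on-page signals the checks above care about:
    <title> text, meta description, and <h1> tags.
    (Hypothetical sketch, not CrawlyCat's real parser.)"""

    def __init__(self):
        super().__init__()
        self.title = None
        self.meta_description = None
        self.h1s = []
        self._capturing = None  # tag whose text we are currently collecting

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._capturing = "title"
        elif tag == "h1":
            self._capturing = "h1"
            self.h1s.append("")
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.meta_description = attrs.get("content", "")

    def handle_data(self, data):
        if self._capturing == "title":
            self.title = (self.title or "") + data
        elif self._capturing == "h1":
            self.h1s[-1] += data

    def handle_endtag(self, tag):
        if tag in ("title", "h1"):
            self._capturing = None

def audit(html: str) -> list[str]:
    """Return a list of human-readable SEO issues for one page."""
    p = SEOAudit()
    p.feed(html)
    issues = []
    if not p.title or not p.title.strip():
        issues.append("missing <title>")
    if not p.meta_description:
        issues.append("missing meta description")
    if len(p.h1s) == 0:
        issues.append("missing <h1>")
    elif len(p.h1s) > 1:
        issues.append("duplicate <h1>")
    return issues
```

A real crawler layers length thresholds and duplicate-across-pages detection on top of this, but the per-page extraction is roughly this simple.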
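The redirect-chain report boils down to following `Location` headers until you hit a non-3xx status. A minimal sketch of that loop, with the fetcher injected as a callable so the chain logic is testable without a network (again, my own names, not CrawlyCat's API):

```python
def trace_redirects(url, fetch, max_hops=10):
    """Follow a redirect chain starting at `url`.

    `fetch` is any callable taking a URL and returning
    (status_code, location_header_or_None) -- e.g. a thin wrapper
    around requests with allow_redirects=False, or a stub in tests.

    Returns (chain_of_urls, final_status). Raises if the chain
    exceeds `max_hops`, which usually signals a redirect loop.
    """
    chain = [url]
    while len(chain) <= max_hops:
        status, location = fetch(chain[-1])
        if not (300 <= status < 400) or not location:
            return chain, status  # terminal response reached
        chain.append(location)
    raise RuntimeError(f"more than {max_hops} redirects from {url}")
```

Any chain longer than one hop is worth flattening: each extra hop costs latency and dilutes link equity.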
*Continue reading on Dev.to.*



