
I Audited 214 Claude Code Skills — 73% Were Silently Broken
I ran a single command against my Claude Code skills directory last week. Out of 47 skills I'd written over three months, 31 had structural problems that degraded how often Claude actually triggered them. Two had never fired at all: their descriptions were so vague that Claude's skill-matching logic couldn't determine when to use them. The fix took 20 minutes once I knew what was wrong. Here's the full setup.

TL;DR: npx pulser@latest audits your Claude Code skills against Anthropic's documented principles: frontmatter structure, description specificity, body quality, and reference coverage. It scores each skill 0–100, flags exact issues, and generates fix commands. 73% of the 214 community skills I tested scored below 60. Free, open-source, MIT-licensed, and it runs in under 5 seconds.

The Problem: Skills That Silently Fail

Claude Code skills fail silently. A bad SKILL.md doesn't throw an error; it just never gets invoked. The description field in your frontmatter is what Claude uses at runtime to decide whether the skill applies to the current request.
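For context, a skill's SKILL.md starts with YAML frontmatter, and the description field is what the matching logic reads. A rough sketch of the difference between a vague and a specific description (the skill name and wording here are illustrative, not from any real skill):

```markdown
---
name: csv-to-json
# Too vague -- gives Claude nothing to match on:
# description: Helps with data files
# Specific -- names the trigger conditions explicitly:
description: Convert CSV files to JSON when the user asks for CSV-to-JSON
  conversion, including header mapping and nested output.
---

Instructions for Claude go here...
```

The specific version states both the action and the conditions under which it applies, which is what the audit's "description specificity" check is looking for.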
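To make the failure mode concrete, here is a minimal sketch of the kind of frontmatter check such an audit might run. This is a hypothetical re-implementation for illustration, not pulser's actual code; the vague-word list and length threshold are assumptions.

```python
import re

# Assumed heuristic: words that signal a description too generic to match on.
VAGUE_WORDS = {"helps", "stuff", "things", "various", "misc"}

def audit_skill(skill_md: str) -> list[str]:
    """Return a list of structural issues found in a SKILL.md document."""
    issues = []
    # Frontmatter is the YAML block between the opening and closing '---' lines.
    match = re.match(r"---\n(.*?)\n---", skill_md, re.DOTALL)
    if not match:
        return ["missing YAML frontmatter"]
    frontmatter = match.group(1)
    desc_match = re.search(r"^description:\s*(.+)$", frontmatter, re.MULTILINE)
    if not desc_match:
        issues.append("missing description field")
    else:
        desc = desc_match.group(1).strip()
        if len(desc) < 40:  # assumed threshold for "too short to match reliably"
            issues.append("description too short to match reliably")
        if VAGUE_WORDS & set(desc.lower().split()):
            issues.append("description uses vague wording")
    return issues
```

A skill whose description is "helps with stuff" would be flagged on both counts, while one that names its trigger conditions passes. The real tool checks more than this (body quality, reference coverage), but the principle is the same: the audit is static analysis over SKILL.md, so it runs in seconds.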




