
Give Your AI Coding Agent a Docstring Quality Tool (MCP Setup for VS Code, Cursor, and Claude Code)
Your AI coding agent can read your code, run your tests, and search your repo. But can it check whether your docstrings actually match what the code does? Research shows that incorrect documentation drops LLM task success by 22.6 percentage points. Missing docs are annoying; wrong docs are toxic, because they create false confidence in generated code. docvet catches these gaps with 19 rules that verify docstrings against the code they document. Since v1.8, it ships an MCP server, meaning any MCP-aware editor can give its AI agent direct, programmatic access to those checks.

What Your Agent Gets

Two tools appear in the agent's toolbox:

docvet_check: runs checks on any Python file or directory and returns structured JSON:

```json
{
  "findings": [
    {
      "file": "src/pipeline/extract.py",
      "line": 42,
      "symbol": "extract_text",
      "rule": "missing-raises",
      "message": "Function 'extract_text' raises ValueError but has no Raises section",
      "category": "required"
    }
  ],
  "summary": {
    "total": 3,
    "by_c
```
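Structured output like this is easy for an agent (or a plain script) to post-process. A minimal sketch, assuming findings follow the schema shown above; the `summarize` helper is hypothetical, not part of docvet:

```python
import json
from collections import Counter

# Example docvet_check output, using only the fields illustrated above
# (the full payload may contain more; this is an assumed excerpt).
raw = """
{
  "findings": [
    {
      "file": "src/pipeline/extract.py",
      "line": 42,
      "symbol": "extract_text",
      "rule": "missing-raises",
      "message": "Function 'extract_text' raises ValueError but has no Raises section",
      "category": "required"
    }
  ]
}
"""

def summarize(findings):
    """Count findings per rule so an agent can prioritize fixes."""
    return Counter(f["rule"] for f in findings)

report = json.loads(raw)
counts = summarize(report["findings"])
print(counts)  # Counter({'missing-raises': 1})
```

An agent consuming this can sort files by finding count and fix the noisiest ones first, rather than re-reading raw linter text.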
Continue reading on Dev.to




