
Running code quality pipelines during AI coding workflows
I've been experimenting a lot with AI-assisted coding tools like Claude Code and Cursor. One thing I noticed is that code quality checks usually run only later, in CI: linting, type checks, tests, security scans, and coverage often happen after the code is already written. That workflow works for humans, but it feels awkward when AI is generating code. So I tried running the entire quality pipeline locally during development and exposing the results in a way that AI tools can use to iterate on fixes. That experiment became a small project called LucidShark.

What LucidShark does

LucidShark is a local-first CLI code quality pipeline designed to work well with AI coding workflows.

Key ideas:

- Runs entirely from the CLI
- Local-first (no SaaS or external service)
- Configuration as code via a repo config file
- Integrates with Claude Code via MCP
- Generates a quality overview that can be committed to git

It orchestrates common quality checks such as:

- linting
- type checking
- tests
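The core idea above, running each check locally and exposing the results in a machine-readable form that an AI tool can iterate on, can be sketched roughly as follows. This is a minimal illustration, not LucidShark's actual implementation; the check names and commands are placeholders (a real pipeline would invoke a linter, type checker, and test runner).

```python
import json
import subprocess

def run_checks(checks):
    """Run each named check as a subprocess and collect a
    machine-readable summary that an AI tool could consume."""
    results = {}
    for name, cmd in checks.items():
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results[name] = {
            "passed": proc.returncode == 0,
            "output": (proc.stdout + proc.stderr).strip(),
        }
    return results

if __name__ == "__main__":
    # Hypothetical stand-in commands; in practice these would be
    # the lint / type-check / test tools configured for the repo.
    checks = {
        "lint": ["python", "-c", "print('no lint errors')"],
        "types": ["python", "-c", "print('types ok')"],
        "tests": ["python", "-c", "import sys; sys.exit(0)"],
    }
    # A JSON summary like this is the kind of artifact an AI
    # assistant (or a committed quality overview) could work from.
    print(json.dumps(run_checks(checks), indent=2))
```

The point of the JSON summary is that it gives the AI tool a structured view of what failed and why, rather than a wall of terminal output.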
Continue reading on Dev.to




