I needed to know if the cheaper model was good enough. So I built an LLM-as-a-Judge pipeline

via Dev.to, by archminor

Benchmarks are useful, but they don't really tell me whether a prompt change or a cheaper model is good enough for my own workflow. I kept running into that, so I ended up building a config-driven eval pipeline: run test cases, check format/schema, use a separate LLM as judge, then generate comparison reports.

What it does

A 3-stage pipeline:

- Inference: run your test cases against candidate models (format and schema validation runs automatically)
- Judge: a separate LLM scores outputs on 9 metrics (accuracy, faithfulness, completeness, etc.)
- Compare: aggregate scores into a comparison report (JSON + Markdown)

Key design choices:

- 3-layer judge architecture: format, content, and expression are evaluated in separate LLM calls with no shared context. This prevents a formatting issue from biasing content scores (sketched below).
- Pairwise + absolute + hybrid modes: compare two models head-to-head, score them independently, or do both.
- Majority vote aggregation: run the judge multiple times and take the majority vote (also sketched below).
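To make the 3-layer idea concrete, here is a minimal sketch of scoring format, content, and expression in fully independent judge calls. The `ask_judge` helper, the prompt wording, and the 1-5 scale are illustrative assumptions, not the project's actual API.

```python
# Sketch only: each layer is judged in its own LLM call with no shared context,
# so a format failure cannot leak into the content or expression scores.

def ask_judge(instruction: str, candidate_output: str) -> int:
    """Hypothetical single judge call; replace with a real judge-LLM request."""
    raise NotImplementedError

def three_layer_judge(candidate_output: str) -> dict[str, int]:
    # Three independent calls; none sees the other layers' instructions or scores.
    return {
        "format": ask_judge("Score 1-5 for schema and format compliance only.", candidate_output),
        "content": ask_judge("Score 1-5 for accuracy, faithfulness, and completeness only.", candidate_output),
        "expression": ask_judge("Score 1-5 for clarity and style only.", candidate_output),
    }
```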
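And a rough sketch of the majority-vote aggregation, assuming the judge returns an integer score per metric. `judge_once`, the metric list, and the default of three votes are placeholders, not the project's configuration.

```python
from collections import Counter

def judge_once(candidate_output: str, metrics: list[str]) -> dict[str, int]:
    """Hypothetical single judge run returning one score per metric."""
    raise NotImplementedError

def judge_with_majority_vote(candidate_output: str, metrics: list[str],
                             votes: int = 3) -> dict[str, int]:
    # Run the judge several times and keep the most common score for each metric,
    # which smooths over single-run variance in the judge model.
    runs = [judge_once(candidate_output, metrics) for _ in range(votes)]
    return {
        metric: Counter(run[metric] for run in runs).most_common(1)[0][0]
        for metric in metrics
    }
```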

Continue reading on Dev.to

