I built an npm middleware that scores your LLM prompts before they hit your agent workflow

via Dev.to · OnChainAIIntel

The problem with most LLM agent workflows is that nobody is checking the quality of the prompts going in. Garbage in, garbage out: at scale, with agents firing hundreds of prompts per day, the garbage compounds fast. I built x402-pqs to fix this. It's an Express middleware that intercepts prompts before they hit any LLM endpoint, scores them for quality, and adds the score to the request headers.

Install

```
npm install x402-pqs
```

Usage

```javascript
const express = require("express");
const { pqsMiddleware } = require("x402-pqs");

const app = express();
app.use(express.json());
app.use(pqsMiddleware({
  threshold: 10,      // warn if prompt scores below 10/40
  vertical: "crypto", // scoring context
  onLowScore: "warn", // warn | block | ignore
}));

app.post("/api/chat", (req, res) => {
  console.log("Prompt score:", req.pqs.score, req.pqs.grade);
  res.json({ message: "ok" });
});
```

Every request gets these headers added automatically: X-PQS-Score —>
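To make the request flow concrete, here is a minimal sketch of how a scoring middleware like this could work internally. The heuristics, the `scorePrompt` and `sketchMiddleware` names, and the 0–40 scale below are assumptions for illustration only, not the actual x402-pqs implementation:

```javascript
// Hypothetical sketch: score a prompt on a 0-40 scale with four crude
// 10-point heuristics, then expose the result the way the middleware does
// (req.pqs plus a response header). Not the library's real scoring logic.

function scorePrompt(prompt) {
  let score = 0;
  if (prompt.length >= 40) score += 10;                           // enough context to act on
  if (/\b(step|format|json|list)\b/i.test(prompt)) score += 10;   // output shape specified
  if (/\b(you are|act as)\b/i.test(prompt)) score += 10;          // role or persona set
  if (!/\b(something|stuff|things)\b/i.test(prompt)) score += 10; // avoids vague filler
  return score;
}

function toGrade(score) {
  return score >= 30 ? "A" : score >= 20 ? "B" : score >= 10 ? "C" : "F";
}

// Middleware factory: scores req.body.prompt, attaches the result to
// req.pqs, mirrors it into a response header, then passes control on.
function sketchMiddleware({ threshold = 10, onLowScore = "warn" } = {}) {
  return (req, res, next) => {
    const score = scorePrompt((req.body && req.body.prompt) || "");
    req.pqs = { score, grade: toGrade(score) };
    res.setHeader("X-PQS-Score", String(score));
    if (score < threshold && onLowScore === "block") {
      return res.status(422).json({ error: "prompt quality below threshold" });
    }
    next();
  };
}
```

Because the score rides along on `req.pqs` and a header, downstream handlers (or the calling client) can log it or gate on it without the middleware ever blocking the request in `warn` mode.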

Continue reading on Dev.to
