
I built a DSL for declaring AI safety constraints — papa-lang v0.2
Show HN: papa-lang — declarative DSL for AI safety configuration

Title: Show HN: papa-lang – a DSL where you declare AI hallucination thresholds before deployment

Post body (copy-paste to news.ycombinator.com/submit):

I built a small declarative language for configuring AI agent safety constraints.

The problem: every team writing multi-agent systems invents its own ad-hoc YAML/Python for expressing things like "block this response if hallucination risk > 20%". There's no standard format for it.

papa-lang lets you write this instead:

```
agent analyst {
  model: claude-3-sonnet
  guard: strict
  hrs_threshold: 0.10
}

swarm medical_team {
  agents: [analyst]
  consensus: 4/7
  pii: filter
  hrs_max: 0.20
}

pipeline main {
  route: orchestrator
  module: papa-life
}
```

Then compile to Python or TypeScript:

```bash
papa compile medical.papa --target python
# → medical_compiled.py (ready to run)
```

The core concept is HRS (Hallucination Risk Score) — a float in [0.0, 1.0]. Each agent declares its threshold. The runtime blocks any response whose HRS exceeds it.
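To make the HRS semantics concrete, here is a minimal sketch of how enforcement could work at runtime. This is not papa-lang's actual generated code; the class and function names are hypothetical, and it assumes the simplest policy (the effective limit is the stricter of the agent threshold and the swarm cap):

```python
from dataclasses import dataclass


@dataclass
class AgentConfig:
    """Mirrors an `agent` block (hypothetical runtime representation)."""
    name: str
    hrs_threshold: float  # from `hrs_threshold:`


@dataclass
class SwarmConfig:
    """Mirrors a `swarm` block (hypothetical runtime representation)."""
    hrs_max: float  # from `hrs_max:`


class HRSViolation(Exception):
    """Raised when a response's risk score exceeds the effective limit."""


def guard_response(agent: AgentConfig, swarm: SwarmConfig,
                   response: str, hrs: float) -> str:
    """Pass the response through, or raise if its HRS is over the limit."""
    if not 0.0 <= hrs <= 1.0:
        raise ValueError(f"HRS must be in [0.0, 1.0], got {hrs}")
    # Enforce the stricter of the per-agent and swarm-wide caps.
    limit = min(agent.hrs_threshold, swarm.hrs_max)
    if hrs > limit:
        raise HRSViolation(
            f"{agent.name}: HRS {hrs:.2f} exceeds limit {limit:.2f}")
    return response


# Usage, mirroring the example config above:
analyst = AgentConfig(name="analyst", hrs_threshold=0.10)
medical_team = SwarmConfig(hrs_max=0.20)
guard_response(analyst, medical_team, "Low-risk answer.", 0.05)  # passes
```

A scored response above 0.10 would raise `HRSViolation` here, since the analyst's own threshold is tighter than the swarm's 0.20 cap.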




