Don’t “Execute” the LLM: Typed Actions + Verifiers for Safe Business Agents


via Dev.to / kanaria007

More AI “agents” now look like they work in real systems. But what actually makes them work is not just model capability: it’s a deterministic verifier, plus an operations layer, that decides what’s allowed to run. In a previous post I used a refund example. In this one I’ll intentionally pick scarier scenarios, the kind that make senior engineers’ blood run cold, and show a minimal design pattern:

Propose (probability) → Verify (determinism) → Execute (authority + audit)

…so you never “execute the LLM.” This article is a minimal pattern, not a complete product spec.

0) Why this design exists (the premise)

LLMs are probabilistic. Output variance itself isn’t the problem. The real problem is executing wobbly output directly. So split the roles:

- LLM: propose a plan
- Verifier: deterministically accept/reject (and optionally normalize) the plan
- Executor: runs only verified Typed Actions (dry-run → approval → production)

The closer you get to “execute free text,” the more accidents you’ll have. The more y
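As a minimal sketch of the Propose → Verify → Execute split, here is one way the pattern could look in Python. The `RefundAction` type, the `ord_` ID prefix, and the `MAX_REFUND_CENTS` policy limit are illustrative assumptions, not from the article; the point is that the executor only ever receives a typed, deterministically verified action, never free text from the model.

```python
from dataclasses import dataclass

# Hypothetical Typed Action: the ONLY shape the executor will accept.
# The LLM proposes one of these; it never emits commands to run directly.
@dataclass(frozen=True)
class RefundAction:
    order_id: str
    amount_cents: int

MAX_REFUND_CENTS = 10_000  # assumed business policy, for illustration

def verify(action: object) -> tuple[bool, str]:
    """Deterministic verifier: accept or reject a proposed action.

    No model calls, no randomness: same input, same verdict, every time.
    """
    if not isinstance(action, RefundAction):
        return False, "unknown action type"
    if not action.order_id.startswith("ord_"):
        return False, "malformed order id"
    if not (0 < action.amount_cents <= MAX_REFUND_CENTS):
        return False, "amount outside policy"
    return True, "ok"

def execute(action: RefundAction, dry_run: bool = True) -> str:
    """Executor: runs only verified actions, dry-run by default."""
    ok, reason = verify(action)
    if not ok:
        raise ValueError(f"rejected: {reason}")
    if dry_run:
        return f"DRY-RUN: refund {action.amount_cents} on {action.order_id}"
    # A real implementation would gate this on human approval and
    # write an audit record before performing the side effect.
    return f"refunded {action.amount_cents} on {action.order_id}"
```

In this sketch the dry-run path is the default, so promotion to a real side effect has to be an explicit, auditable choice rather than the happy path.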

Continue reading on Dev.to
