I formalized the human in the AI energy equation

Via Dev.to, by Felipe Cardoso

Felipe Cardoso, April 2026

Over the past few months I've been digging into something that has bothered me since I started running LLMs locally on my PC. The question was simple: why does a 3-billion-parameter model fail so often when running on its own, yet when I sit next to it and guide it through the task, breaking things down, checking each output, rewording what failed, the results improve dramatically?

Anyone who has used Copilot, Cursor, or any local LLM has noticed this. But I wanted to go beyond just "noticing it." I wanted to know whether you could measure it, and whether you could put it in an equation. You can. And I wrote a paper about it.

The problem nobody connected

The academic literature on LLMs treats inference as an autonomous process: the model receives input, generates output, and someone measures joules per token. If the model gets it wrong, it regenerates. That's the cost. When researchers study "human-in-the-loop", they focus on quality: the human as a corrector that improves accuracy.

Continue reading on Dev.to
