
How AI Detection Actually Works (From a Developer Who Built One)
I built an AI text detector, so I've spent more time than I'd like staring at the mechanics of how these tools work and where they break down. The marketing promises confidence scores and definitive answers. The reality is messier.

Perplexity: The Core Signal

Perplexity measures how "surprised" a language model is by a piece of text: given a sequence of words, how predictable is the next word at each step? When a language model generates text, it picks tokens that have high probability given the preceding context. That's literally what it's optimized to do. The result is text with low perplexity: each word follows naturally and predictably from the ones before it.

Human writing tends to have higher perplexity. We make unexpected word choices. We use idioms that don't follow statistical patterns. We start sentences in ways that a probability distribution wouldn't favor. We throw in a technical term, then follow it with slang in the same paragraph.

A perplexity score is computed by running the text through a language model and averaging how well the model predicted each token.
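To make the idea concrete, here is a toy sketch of the perplexity calculation. This is not the author's detector: it swaps the neural language model for a tiny bigram model with add-one smoothing (the corpus, function names, and smoothing choice are all illustrative assumptions), but the final formula is the standard one, exp of the average negative log-probability per token.

```python
import math
from collections import defaultdict

def train_bigram(corpus):
    # Toy "language model": count bigrams and preceding-word totals
    # over a whitespace-tokenized corpus. <s> marks sentence start.
    unigrams = defaultdict(int)
    bigrams = defaultdict(int)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.lower().split()
        for prev, cur in zip(tokens, tokens[1:]):
            unigrams[prev] += 1
            bigrams[(prev, cur)] += 1
    vocab = {t for s in corpus for t in s.lower().split()} | {"<s>"}
    return unigrams, bigrams, len(vocab)

def perplexity(text, unigrams, bigrams, vocab_size):
    # PPL = exp(-(1/N) * sum_i log P(w_i | w_{i-1})).
    # Add-one smoothing gives unseen bigrams a small nonzero probability,
    # so "surprising" word choices raise the score instead of breaking it.
    tokens = ["<s>"] + text.lower().split()
    n = len(tokens) - 1
    log_prob = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / n)

corpus = [
    "the model predicts the next word",
    "the model predicts the next token",
]
uni, bi, v = train_bigram(corpus)
# Predictable continuation of the training data: low perplexity.
print(perplexity("the model predicts the next word", uni, bi, v))
# Unexpected word choices the model has never seen: high perplexity.
print(perplexity("zebra quantum the slang idiom", uni, bi, v))
```

A real detector does the same averaging of per-token log-probabilities, just with a large pretrained model instead of bigram counts, and then thresholds the resulting score.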
Continue reading on Dev.to Webdev




