
I ran AI brand checks on 160 companies — here's what the data actually shows
Over the past week, I ran brand checks on 160 companies across ChatGPT, Gemini, Perplexity, and Claude as part of building GEO Brand Monitor. The results surprised me, not because AI search is powerful, but because of how differently the four engines score the same brand. Here's the data.

The scoring method

Each engine is queried with prompts like "what can you tell me about [Brand]" and "is [Brand] trustworthy." The response is scored 0–100 based on sentiment, mention frequency, and recommendation likelihood: 0 means negative or absent, 100 means confidently recommended. You can run any brand yourself at geo.atlas1m.com (free, no signup).

Key findings

1. Engine gaps are the real story

A brand can score 100 on ChatGPT and 12 on Claude at the same time. That's not noise: it reflects how differently each engine was trained and what data it prioritizes.

Top gaps found:

Brand         ChatGPT  Gemini  Perplexity  Claude
Cash App           88     100         100      12
Tripadvisor       100      17         100     100
West Elm          100     100          17     100
Hims               83      17          83       8
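To make the "engine gap" concrete, here is a minimal sketch that computes the spread between the best and worst engine score per brand, using the four sample rows from the table above. The dictionary layout and the `engine_gap` helper are my own illustration, not part of the GEO Brand Monitor tool.

```python
# Per-engine scores for the four brands reported in the article.
scores = {
    "Cash App":    {"ChatGPT": 88,  "Gemini": 100, "Perplexity": 100, "Claude": 12},
    "Tripadvisor": {"ChatGPT": 100, "Gemini": 17,  "Perplexity": 100, "Claude": 100},
    "West Elm":    {"ChatGPT": 100, "Gemini": 100, "Perplexity": 17,  "Claude": 100},
    "Hims":        {"ChatGPT": 83,  "Gemini": 17,  "Perplexity": 83,  "Claude": 8},
}

def engine_gap(brand_scores):
    """Spread between the highest and lowest engine score for one brand."""
    return max(brand_scores.values()) - min(brand_scores.values())

gaps = {brand: engine_gap(s) for brand, s in scores.items()}
# Cash App: 88, Tripadvisor: 83, West Elm: 83, Hims: 75
```

A gap of 75+ points on the same brand is the pattern the article is pointing at: the brand hasn't changed, only the engine has.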
Continue reading on Dev.to Webdev



