
Same Agents, Different Minds — What 180 Configurations Proved About AI Environment Design
Google tested 180 agent configurations. Same foundation models. Same tasks. Same tools. The only variable was how the agents talked to each other. Independent agents, working in parallel with no communication, amplified errors 17.2 times. Give the same agents a centralized hub-and-spoke topology, and error amplification drops to 4.4 times. Same intelligence. Same training. A 3.9x difference in error rate, explained entirely by communication structure.

This isn't a story about better prompts or smarter models. It's a story about environment. And it follows directly from a claim I made in Part 1 of this series: the interface isn't plumbing between the AI and the world. It's a mold that shapes what the AI becomes.

Part 1 argued this through cases: a developer who felt hollowed out by AI, a drawing tool whose constraints generated a creative community, a teaching pipeline where replacing checklists with questions changed the model's cognitive depth without changing the model. The claim w
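To make the topology difference concrete, here is a toy sketch (not the Google setup; the error model, parameters, and function names are all illustrative assumptions). Each agent takes noisy steps that can corrupt its state. In the independent topology, errors compound unchecked across steps; in the hub-and-spoke topology, a central hub takes a majority vote after each step and rebroadcasts, correcting errors before they amplify.

```python
import random

def noisy_step(value, p_err, rng):
    # Illustrative error model: an agent step flips its boolean
    # state with probability p_err.
    return (not value) if rng.random() < p_err else value

def independent(n_agents, n_steps, p_err, rng):
    # Independent topology: each agent chains its own steps with
    # no communication, so per-step errors compound.
    results = []
    for _ in range(n_agents):
        v = True  # start from the correct state
        for _ in range(n_steps):
            v = noisy_step(v, p_err, rng)
        results.append(v)
    return results

def hub_and_spoke(n_agents, n_steps, p_err, rng):
    # Centralized topology: after every step the hub takes a
    # majority vote and rebroadcasts the consensus, so a single
    # agent's error is usually corrected before it spreads.
    states = [True] * n_agents
    for _ in range(n_steps):
        states = [noisy_step(v, p_err, rng) for v in states]
        consensus = sum(states) > n_agents / 2
        states = [consensus] * n_agents  # hub rebroadcast
    return states

rng = random.Random(0)
ind = independent(5, 20, 0.1, rng)
hub = hub_and_spoke(5, 20, 0.1, rng)
print(sum(not v for v in ind), "wrong answers, independent")
print(sum(not v for v in hub), "wrong answers, hub-and-spoke")
```

Run over many seeds, the hub-and-spoke variant ends far fewer runs in a corrupted state, even though both use the same per-step error rate: the difference comes entirely from the communication structure, which is the article's point.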
Continue reading on Dev.to

