
I built a simulator that runs AI regulations through 10,000 agents and shows you how many comply, how many relocate, and who evades
I got tired of AI policy debates being purely theoretical. Everyone argues about what a regulation should do. Nobody shows what companies will do. So I built SwarmCast.

You upload a document — a policy draft, a news article, a hypothetical. It parses it and runs a population of heterogeneous agents (companies, startups, regulators, investors) through it across 15 jurisdictions. Compliance curves, evasion patterns, jurisdiction flight, lobbying coalitions — all emerging from individual decisions, not hand-coded outcomes.

Two things I cared about:

Epistemic honesty. Every output is tagged GROUNDED, DIRECTIONAL, or ASSUMED. If a number traces to calibrated empirical data, it says so. If it's a structural assumption, it says that too. ASSUMED outputs are visually dimmed. Most simulation tools present all their numbers with equal confidence. This one doesn't.

Adversarial injection. Push a belief into a fraction of the population mid-run and measure how far it spreads and how much it bends aggregate outcomes.
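To make the injection idea concrete, here is a minimal sketch of the mechanism: seed a belief into a fraction of a 10,000-agent population, let it spread through random mixing, and track adoption over time. All names (`Agent`, `inject`, `step`, `adoption_prob`) are illustrative assumptions, not SwarmCast's actual API, and the contagion rule is deliberately simplistic.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    # Hypothetical minimal agent: one boolean belief flag.
    holds_belief: bool = False

def inject(agents, fraction, rng):
    """Seed the adversarial belief into a random fraction of the population."""
    for agent in rng.sample(agents, int(len(agents) * fraction)):
        agent.holds_belief = True

def step(agents, adoption_prob, rng):
    """One mixing round: each current believer contacts one random peer,
    who adopts the belief with some probability."""
    believers = [a for a in agents if a.holds_belief]
    for _ in believers:
        peer = rng.choice(agents)
        if rng.random() < adoption_prob:
            peer.holds_belief = True

def spread(agents):
    """Fraction of the population currently holding the belief."""
    return sum(a.holds_belief for a in agents) / len(agents)

rng = random.Random(0)
agents = [Agent() for _ in range(10_000)]
inject(agents, fraction=0.05, rng=rng)  # push belief into 5% mid-run

history = [spread(agents)]
for _ in range(20):
    step(agents, adoption_prob=0.3, rng=rng)
    history.append(spread(agents))
# history[0] is exactly 0.05; the curve shows how far the belief spread
```

Comparing aggregate outcomes (compliance rates, relocation counts) between a run with and without the injection is then one way to measure how much the belief "bends" the result.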
Continue reading on Dev.to




