
Autonomous AI in Legal Limbo
When Anthropic released Claude's “computer use” feature in October 2024, the AI could suddenly navigate entire computers by interpreting screen content and simulating keyboard and mouse input. OpenAI followed in January 2025 with Operator, powered by its Computer-Using Agent model. Google deployed Gemini 2.0 with Astra for low-latency multimodal perception. The age of agentic AI (systems capable of autonomous decision-making without constant human oversight) had arrived.

So had the regulatory panic. Across government offices in Brussels, London, Washington, and beyond, policymakers face an uncomfortable truth: their legal frameworks were built for software that follows instructions, not AI that makes its own choices. When an autonomous agent can book flights, execute financial transactions, manage customer relationships, or even write and deploy code independently, who bears responsibility when things go catastrophically wrong? The answer, frustratingly, depends on which jurisdiction you ask.
Continue reading on Dev.to
