How to Stop browser-use From Choking When Your Primary LLM Hits a 429


via Dev.to Python, by Nathaniel Hamlett

If you're running browser-use in production (actual automated form submissions, multi-step web agents, anything that has to keep working while you're asleep), you've hit this wall: your LLM returns a 429 and the whole agent dies. Most tutorials show you how to get browser-use working; nobody talks about keeping it working when your free quota runs out at 2am and there are 40 jobs left in the queue. Here's the actual fix.

Why the Default Setup Is Fragile

Out of the box, browser-use scripts look like this:

```python
import os

from browser_use import Agent
from browser_use.llm.openai.chat import ChatOpenAI

llm = ChatOpenAI(
    model="anthropic/claude-sonnet-4",
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

agent = Agent(task=task, llm=llm, browser=browser)
await agent.run()
```

This hardcodes one LLM. When that LLM returns a 429 (rate limit), a 503 (overloaded), or a quota error, the agent crashes.
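The fallback idea behind the fix can be sketched independently of browser-use itself: try the primary LLM, catch the rate-limit error, and rerun the agent with a backup model. This is a minimal, self-contained sketch; `RateLimitError`, `run_with_fallback`, and the `fake_run` callable are hypothetical stand-ins, not part of the browser-use API, and the exception type you actually catch depends on your LLM client.

```python
import asyncio


class RateLimitError(Exception):
    """Hypothetical stand-in for the 429/quota error your LLM client raises."""


async def run_with_fallback(run_agent, llms):
    """Try each LLM in order; on a rate-limit error, fall through to the next."""
    last_exc = None
    for llm in llms:
        try:
            return await run_agent(llm)
        except RateLimitError as exc:
            last_exc = exc  # this model is throttled; try the next one
    raise last_exc  # every model was throttled


# Toy demonstration: the first "LLM" always 429s, the second succeeds.
async def fake_run(llm):
    if llm == "primary":
        raise RateLimitError("429 Too Many Requests")
    return f"completed with {llm}"


result = asyncio.run(run_with_fallback(fake_run, ["primary", "fallback"]))
print(result)  # completed with fallback
```

In a real script, `run_agent` would construct a fresh `Agent` with the given `llm` and await `agent.run()`, so a mid-queue 429 costs you one retry instead of the whole job.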
