
The One Parameter That Broke Every GPT-5 Call
You upgrade your model to GPT-5.2. Every single request returns a 400 error. Your agent retries, hits the fallback chain, and eventually times out. The logs show:

```json
{
  "error": {
    "message": "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.",
    "type": "invalid_request_error"
  }
}
```

One renamed parameter. 100% failure rate. OpenClaw #62130 tells the story.

What Happened

OpenAI's GPT-5.x family dropped support for the `max_tokens` parameter. The replacement, `max_completion_tokens`, has been available since the o1 model series, which meant months of overlap where both names worked on older models. But GPT-5.x drew the line: the old parameter name now gets a hard 400 rejection.

OpenClaw, like many agent frameworks, had `max_tokens` hardcoded deep in its OpenAI provider layer. It worked perfectly for GPT-4o, GPT-4.5, and everything before. The day someone pointed their config at `gpt-5.2`, every request failed.

Why This Is Worse Than It Looks

A missing feature is an
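One way to avoid this failure mode is to pick the parameter name from the model identifier before building the request. Here's a minimal sketch of that idea; the prefix list and helper names are assumptions for illustration, not OpenClaw's actual code. (Since `max_completion_tokens` is also accepted by older models, always sending the new name is the simpler fix if you don't need to support legacy proxies that only know `max_tokens`.)

```python
def token_limit_param(model: str) -> str:
    """Return the token-limit parameter name to send for a given model.

    GPT-5.x (and the o-series reasoning models) reject 'max_tokens'
    with a 400 error and require 'max_completion_tokens' instead.
    The prefix list here is an assumption for illustration.
    """
    needs_new_param = model.startswith(("gpt-5", "o1", "o3", "o4"))
    return "max_completion_tokens" if needs_new_param else "max_tokens"


def build_request(model: str, prompt: str, limit: int) -> dict:
    """Build a chat-completion payload with the right limit parameter."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        token_limit_param(model): limit,
    }


payload = build_request("gpt-5.2", "Summarize this log.", 256)
# The payload carries 'max_completion_tokens', not 'max_tokens'.
```

The point of routing every request through one helper is that the next rename (and there will be one) becomes a one-line change instead of a grep through the provider layer.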

