OOMKilled in Kubernetes: Why Your Pods Die Without Warning (and How to Fix It)
How-To · DevOps

By Sumit Purandare, via Dev.to

😨 The Silent Killer in Kubernetes

Your pod is running fine… Everything looks normal… And suddenly, it restarts. No clear error. No obvious logs. Just a restart.

If this has happened to you, you've likely encountered: 👉 OOMKilled

🤔 What is OOMKilled?

OOMKilled stands for: Out Of Memory Killed

In Kubernetes, when a container exceeds its memory limit, the system forcefully terminates it. There is:

• ❌ No graceful shutdown
• ❌ No detailed error message
• ❌ Sometimes no helpful logs

⚠️ Why Does OOMKilled Happen?

Here are the most common reasons:

1. Memory Limits Are Too Low. Your container simply doesn't have enough memory allocated (see the manifest sketch below).
2. Memory Leaks in the Application. Your app keeps consuming memory over time until it crashes.
3. Traffic Spikes / Batch Jobs. A sudden increase in load → memory usage spikes → container killed.
4. JVM / Python Apps. Some runtimes don't respect container limits well and need explicit tuning (see the JVM flags sketch below).

🔍 How to Detect OOMKilled

Run:

    kubectl describe pod <pod-name>

Look for:

    Last State:  Terminated
      Reason:    OOMKilled
      Exit Code: 137
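To pull just the last termination reason without scanning the full describe output, you can also query the pod's status directly. Here, <pod-name> is a placeholder and the [0] index assumes a single-container pod:

    kubectl get pod <pod-name> \
      -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

An exit code of 137 (128 + SIGKILL) is another telltale sign that the container was killed rather than crashing on its own.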
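For cause 1, the usual fix is to right-size the container's memory request and limit. Here is a minimal sketch of where those fields live in a pod spec; the pod name, image, and values are illustrative assumptions, not recommendations:

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-app               # hypothetical pod name
    spec:
      containers:
        - name: app
          image: demo-app:latest   # hypothetical image
          resources:
            requests:
              memory: "256Mi"      # what the scheduler reserves for this container
            limits:
              memory: "512Mi"      # exceeding this gets the container OOMKilled

Basing the limit on observed usage (for example, from kubectl top pod) plus headroom beats guessing.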
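For cause 4, the JVM is the classic example: older JVMs sized the heap from the node's total memory rather than the container's limit. On JDK 10+ (and 8u191+) container awareness is on by default, and you can cap the heap relative to the container's limit. A sketch of one common way to pass the flag, with an illustrative percentage:

    env:
      - name: JAVA_TOOL_OPTIONS              # picked up automatically by the JVM
        value: "-XX:MaxRAMPercentage=75.0"   # cap heap at 75% of the container's memory limit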

Continue reading on Dev.to