
We migrated 3 EKS clusters on AWS to Karpenter. Here's what nobody warns you about.
Most teams switch for the cost savings. They stay for the headaches they didn't see coming.

Before Karpenter, we ran fixed managed node groups: over-provisioned, expensive, and sized by gut feel. Cluster Autoscaler helped, but it scaled based on what you pre-configured, not on what your workloads actually needed. You were still making the sizing decisions; you were just automating the slow parts.

Karpenter changes the contract entirely. It provisions exactly what your pods ask for. Which sounds great, until you realise your pods have been lying about what they need.

Here's what actually happened, and what to do instead.

The bootstrap deadlock nobody mentions

Karpenter can't run on nodes it provisions; it has a nodeAffinity that actively avoids them. If you remove your managed node group too early, Karpenter has nowhere to live.

Fix: always keep a small managed node group tainted with karpenter.sh/controller. It runs Karpenter and CoreDNS. Nothing else.

Karpenter only sees requests.
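The fix above can be sketched as a taint-and-toleration pair. This is a minimal illustration, assuming the karpenter.sh/controller taint from the text is applied to the managed node group (e.g. via eksctl or the EKS console); the exact value and effect here are our choices, not something Karpenter mandates:

```yaml
# Taint on the managed node group's nodes, so ordinary workloads
# never land on it (assumed values, matching the taint key above):
#   key: karpenter.sh/controller, value: "true", effect: NoSchedule
#
# Karpenter's Deployment (and CoreDNS, if it should run there too)
# then needs a matching toleration in its pod spec:
tolerations:
  - key: karpenter.sh/controller
    operator: Equal
    value: "true"
    effect: NoSchedule
```

With this in place, the tainted node group stays nearly empty: only pods that explicitly tolerate the taint, i.e. Karpenter and CoreDNS, can schedule onto it.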
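This is also why "pods lying about what they need" matters so much: Karpenter sizes nodes from the resource requests in the pod spec, not from what the container actually consumes at runtime. A minimal sketch (the workload name and numbers are hypothetical):

```yaml
# Karpenter bin-packs against the requests below, not real usage.
# Inflated or missing requests mean wrongly sized nodes.
apiVersion: v1
kind: Pod
metadata:
  name: example-app   # hypothetical workload
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "500m"      # what Karpenter provisions for
          memory: "512Mi"
        limits:
          cpu: "1"
          memory: "1Gi"
```

If this pod actually idles at 50m CPU, Karpenter still reserves 500m of node capacity for it, so right-sizing requests is a prerequisite for the cost savings to materialise.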



