When Kubernetes Says "All Green" But Your System Is Already Failing
It's not a theoretical scenario. The cluster health checks all come back green. Node status shows Ready across the board. Your monitoring stack reports nominal CPU and memory utilization. And somewhere in a utilities namespace, a container has restarted 24,069 times over the past 68 days, roughly every five minutes, quietly, without triggering a single critical alert.

That number, 24,069 restarts, came from a real scan of a non-production cluster last week, performed with an open-source Kubernetes scanner that runs with read-only permissions: it can see the state of the cluster, but it cannot and did not change a single thing. The failures it found were entirely of the cluster's own making. The namespace the container lived in showed green on every dashboard the team monitored. No alert had fired. No ticket had been created. The workload had, for practical purposes, been broken for over two months, and the cluster's observability layer had communicated exactly nothing about it.
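The article doesn't name the scanner, but the underlying signal is available to anyone with read-only cluster access. A minimal sketch, assuming `kubectl` is configured against the cluster, that surfaces pods by restart count (the same field a dashboard would need to watch):

```shell
# List pods in all namespaces, sorted by the restart count of their
# first container; the noisiest offenders appear at the bottom.
# Read-only: 'get' requires no write permissions on the cluster.
kubectl get pods --all-namespaces \
  --sort-by='.status.containerStatuses[0].restartCount' \
  -o custom-columns='NAMESPACE:.metadata.namespace,POD:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount'
```

A pod showing tens of thousands of restarts in that last column is exactly the kind of failure that node status and resource metrics never reflect, which is why it can hide behind a green dashboard.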
Continue reading on DZone


