
2025-06-15 · 3 min read

My Kubernetes Cluster Looked Healthy. Production Wasn't.

Kubernetes · Observability · Monitoring · Production · DevOps

At first glance, everything looked fine.

Dashboards were green. Pods were running. CPU usage was low. Memory was stable. No crashes. No restarts. No alerts.

And yet, production felt broken.

This isn't a war story from a production outage. This is something I built on purpose.

I work with on-prem systems day to day, so I keep a small Kubernetes lab running to stay sharp. Not for tutorials. Not for certifications. For reproducing the kind of production pain that metrics don't show.

The Scenario I Built on Purpose

The goal was simple:

  • Increase latency
  • Keep CPU low
  • Avoid errors
  • Avoid crashes
  • Keep every pod in Running state

I wanted to create pain without pressure.

The service would sometimes respond in milliseconds, sometimes in seconds. No pattern. No warning. From the user's perspective, the system felt unreliable. From Kubernetes' perspective, everything was perfectly fine.
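
For reference, this is roughly how such a service can be reproduced: a Deployment whose handler just sleeps a random amount before replying. The image, names, and sleep values below are illustrative, not the exact lab setup.

# Sketch: latency without load. The handler only waits, so CPU stays idle
# while individual responses swing from milliseconds to seconds.
# Names, image, and sleep values are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jittery-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: jittery-service
  template:
    metadata:
      labels:
        app: jittery-service
    spec:
      containers:
      - name: app
        image: python:3.12-alpine
        ports:
        - containerPort: 8080
        command: ["python", "-c"]
        args:
        - |
          import random, time
          from http.server import BaseHTTPRequestHandler, HTTPServer
          class H(BaseHTTPRequestHandler):
              def do_GET(self):
                  # No computation, no memory churn, just waiting.
                  time.sleep(random.choice([0.01, 0.05, 2.0, 4.0]))
                  self.send_response(200)
                  self.end_headers()
                  self.wfile.write(b"ok")
          HTTPServer(("", 8080), H).serve_forever()
EOF

# Expose it in-cluster so it can be called by name.
kubectl expose deployment jittery-service --port=8080

Nothing here stresses the node: a sleeping handler burns no CPU, so resource metrics stay flat while requests hang.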

The First Things I Checked

Like most people, I started with the basics:

kubectl get pods    # status, readiness, restart counts
kubectl top pods    # CPU and memory, as reported by metrics-server

What I saw looked reassuring — CPU was low, memory usage was low, all pods were running, replica counts were stable.

My conclusion was quick. And wrong.

"The cluster is healthy."

That was my first mistake.

Why CPU Fooled Me

The latency I introduced had nothing to do with CPU. No heavy computation. No resource exhaustion. No obvious bottleneck. Just waiting. Blocking. Slow responses.

From Kubernetes' point of view, there was nothing to react to:

  • No CPU pressure
  • No memory pressure
  • No reason to intervene

The cluster wasn't lying to me. It was answering exactly the question I was asking.

The Second Illusion: Multiple Replicas

I also had more than one replica running. That should have helped, right?

I assumed traffic would naturally balance out and the system would feel stable. What actually happened:

  • Some pods were fast
  • Some pods were slow
  • Requests randomly landed on the slow ones

The averages looked fine. User experience did not. Nothing was technically "down", but nothing felt reliable either.
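
The quickest way to see this is from the client side, timing individual requests instead of watching pod metrics. A rough sketch against the illustrative service from the earlier snippet, run from inside the cluster or through a port-forward:

# Fire a burst of requests and record the wall-clock time of each one.
# jittery-service is the illustrative name from the sketch above.
for i in $(seq 1 200); do
  curl -s -o /dev/null -w '%{time_total}\n' http://jittery-service:8080/
done | sort -n > timings.txt

# The median can look fine while the tail is painful.
awk '{a[NR]=$1} END {printf "p50=%ss  p99=%ss\n", a[int(NR*0.50)], a[int(NR*0.99)]}' timings.txt

Two numbers from the same data, telling two very different stories.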

What the Real Problem Was

It wasn't CPU. It wasn't memory. It wasn't pod health.

The real issue was time.

  • Latency variance
  • Tail latency
  • Unpredictable response times

None of these showed up in the metrics I was watching.

The Core Lesson

Kubernetes optimizes for resource pressure. Users experience time.

A green cluster does not guarantee a healthy production experience.

What I Took Away

  • "Running" does not mean "working well" — a pod can be in Running state and still serve terrible responses
  • Low CPU does not mean good user experience — latency problems don't need compute pressure
  • Averages hide real pain — P50 can look great while P99 is destroying the experience
  • Latency is a first-class signal — it belongs on your primary dashboard, not buried in traces
  • Looking healthy is not the same as being healthy — this applies to clusters and to observability strategies

What Comes Next

The fix didn't involve refactoring the slow service. No performance tuning. No "just scale it."

Only platform decisions: readiness probes, traffic isolation, and understanding that protecting users is not the same thing as fixing latency.
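
To make that concrete: one such decision is a readiness probe with a tight timeout, so a pod that has gone slow drops out of the Service endpoints instead of continuing to receive traffic. A minimal sketch of the mechanism, applied to the illustrative deployment above rather than the exact lab setup:

# Sketch: mark a pod unready when it stops answering quickly.
# Kubernetes then stops routing new requests to it. The latency is untouched,
# but users stop paying for it. Names and thresholds are illustrative.
kubectl patch deployment jittery-service --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/readinessProbe", "value": {
    "httpGet": {"path": "/", "port": 8080},
    "periodSeconds": 5,
    "timeoutSeconds": 1,
    "failureThreshold": 2
  }}
]'

Nothing restarts and nothing scales. The only thing that changes is which pods are allowed to receive traffic.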

Once you stop trusting green dashboards, that's when the real work starts.

