
Zero-Downtime Deployments with Kubernetes: Configuring Rolling Updates Correctly

A misconfigured rolling update can still cause downtime even with Kubernetes. We show the most common mistakes and how to avoid them.

devRocks Team · 5 March 2026
Tags: Kubernetes · Deployment · Zero-Downtime · DevOps

Why Rolling Updates Alone Are Not Enough

Kubernetes rolling updates replace pods incrementally — but without proper configuration of readiness probes, graceful shutdown, and pre-stop hooks, outages still occur.
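As a concrete sketch, all three mechanisms can be expressed directly in the Deployment manifest. The service name, image, port 8080, and the /healthz endpoint are assumptions for illustration, not fixed conventions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                    # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Upper bound for graceful shutdown: SIGTERM first, SIGKILL after 30s.
      terminationGracePeriodSeconds: 30
      containers:
        - name: app
          image: registry.example.com/my-app:1.2.3   # hypothetical image
          ports:
            - containerPort: 8080
          # Traffic is only routed to the pod once this probe succeeds.
          readinessProbe:
            httpGet:
              path: /healthz               # assumed health endpoint
              port: 8080
            periodSeconds: 5
            failureThreshold: 3
          lifecycle:
            # Give the endpoint controller time to remove the pod
            # from rotation before SIGTERM is delivered.
            preStop:
              exec:
                command: ["sleep", "5"]
```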

The Three Pillars of Zero Downtime

  • Readiness Probes: Kubernetes needs to know when a new pod is ready to receive traffic. Without a readiness probe, traffic is sent to pods that are still starting up.
  • Graceful Shutdown: Old pods must complete in-flight requests before being terminated. The SIGTERM signal must be handled correctly by your application.
  • Pre-Stop Hooks: A short sleep (5-10 seconds) in the pre-stop hook gives the load balancer time to remove the pod from rotation before it shuts down.

Deployment Strategy

  • maxSurge: Set to 25-50% — allows Kubernetes to start new pods before removing old ones.
  • maxUnavailable: Set to 0 — no old pod is terminated until its replacement reports Ready, so available capacity never drops below the desired replica count.
  • minReadySeconds: Set to 10-30 seconds — a new pod must stay Ready for this long before the rollout proceeds, so crashes that only surface shortly after startup still halt the rollout.
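Put together, the strategy section of the Deployment could look like this (replica count and exact values are illustrative):

```yaml
spec:
  replicas: 4
  # New pods must stay Ready for 15s before the rollout proceeds.
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # start up to one extra pod (of 4) first
      maxUnavailable: 0    # never drop below the desired replica count
```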

Test Your Setup

We recommend testing deployments regularly under load. Tools like k6 or Locust can continuously send requests during a deployment — every 5xx error reveals a gap in your zero-downtime configuration.
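If you want a quick check without setting up k6 or Locust, even a small polling script surfaces gaps: it repeatedly requests one URL during the rollout and counts failed responses. The URL and timings below are illustrative:

```python
import time
import urllib.error
import urllib.request

def count_failures(url, duration_s=60, interval_s=0.5):
    """Poll `url` for `duration_s` seconds; count 5xx responses and
    connection errors. Any failure observed while a rollout is in
    progress reveals a gap in the zero-downtime configuration."""
    failures = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=2)
        except urllib.error.HTTPError as err:
            # urlopen raises HTTPError for 4xx/5xx; only count server errors.
            if err.code >= 500:
                failures += 1
        except OSError:
            # Connection refused/reset, DNS failure, timeout.
            failures += 1
        time.sleep(interval_s)
    return failures

if __name__ == "__main__":
    # Assumed endpoint; point this at your service's public URL.
    print(count_failures("http://localhost:8080/healthz"))
```

Run it in one terminal while the rollout (e.g. `kubectl rollout restart deployment/my-app`) runs in another; any non-zero count means a request was dropped.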

