Learn to configure PodDisruptionBudget resources in Kubernetes to maintain application availability during voluntary disruptions. This tutorial covers creating disruption budgets, implementing policies for different workload types, and monitoring disruption events with kubectl.
Prerequisites
- Running Kubernetes cluster with admin access (see our Kubernetes cluster installation guide if you need to set one up)
- kubectl configured
- Applications deployed that you want to protect
- Basic understanding of Kubernetes workloads
What this solves
Pod Disruption Budgets (PDBs) protect your Kubernetes applications from voluntary disruptions such as node maintenance, cluster upgrades, and autoscaler scale-downs. A PDB defines the minimum number or percentage of pods that must remain available while pods are being evicted for planned maintenance. Without PDBs, kubectl drain or cluster autoscaling can take down all replicas of a service simultaneously, causing downtime. Note that PDBs only guard voluntary disruptions; they cannot protect against involuntary ones such as node hardware failure.
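The controller's core decision is simple arithmetic: an eviction is allowed only while the number of healthy pods exceeds the budget's floor. A minimal sketch of that calculation (the real logic lives in kube-controller-manager; this is illustrative only):

```shell
#!/bin/bash
# Sketch of the PDB arithmetic: disruptionsAllowed = healthy - desiredHealthy,
# floored at zero. desiredHealthy comes from minAvailable (or from
# expectedPods - maxUnavailable when maxUnavailable is used instead).
disruptions_allowed() {
  local healthy=$1 desired_healthy=$2
  local allowed=$(( healthy - desired_healthy ))
  (( allowed < 0 )) && allowed=0
  echo "$allowed"
}

disruptions_allowed 3 2   # 3 healthy pods, minAvailable 2 -> prints 1
disruptions_allowed 2 2   # at the floor -> prints 0, evictions are blocked
```

As soon as the allowed count hits zero, every eviction request is rejected until a pod becomes healthy again.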
Understanding pod disruption budgets
Check current cluster disruptions
First, examine any existing pod disruption budgets in your cluster to understand the current protection level.
kubectl get pdb --all-namespaces
kubectl describe pdb --all-namespaces
Examine workload replica counts
Identify your critical deployments and their replica counts to plan appropriate disruption budgets.
kubectl get deployments --all-namespaces -o wide
kubectl get replicasets --all-namespaces -o wide
Creating basic pod disruption budgets
Create a minimum pods disruption budget
This PDB ensures at least 2 pods remain available during disruptions for a web application with multiple replicas.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-app-pdb
  namespace: production
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web-app
      tier: frontend
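For the selector to bind, the pods themselves (not just the Deployment) must carry those labels. A matching Deployment fragment would look like this (the name, image, and replica count are assumptions mirroring the PDB above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
      tier: frontend
  template:
    metadata:
      labels:
        app: web-app        # must match the PDB's matchLabels
        tier: frontend
    spec:
      containers:
        - name: web-app
          image: nginx:1.25  # placeholder image
```

With 3 replicas and minAvailable: 2, exactly one pod can be evicted at a time.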
Apply the disruption budget
Deploy the PDB and verify it targets the correct pods in your application.
kubectl apply -f web-app-pdb.yaml
kubectl get pdb web-app-pdb -n production
kubectl describe pdb web-app-pdb -n production
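When reading PDB status, the field that matters most is status.disruptionsAllowed: anything above zero means an eviction can proceed right now. A sketch of pulling that field out of the JSON (fed here from a canned sample so it runs without a cluster; against a live cluster you would pipe in kubectl get pdb web-app-pdb -n production -o json):

```shell
#!/bin/bash
# Canned sample of the status block kubectl returns for a PDB
status_json='{"status":{"currentHealthy":3,"desiredHealthy":2,"disruptionsAllowed":1,"expectedPods":3}}'

# Extract disruptionsAllowed with sed (jq would be cleaner if available)
allowed=$(echo "$status_json" | sed -n 's/.*"disruptionsAllowed":\([0-9]*\).*/\1/p')
echo "disruptions allowed: $allowed"
if [ "$allowed" -gt 0 ]; then
  echo "evictions may proceed"
else
  echo "evictions are currently blocked"
fi
```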
Create a percentage-based disruption budget
This approach scales better with dynamic replica counts and allows up to 25% of pods to be unavailable.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-service-pdb
  namespace: production
spec:
  maxUnavailable: 25%
  selector:
    matchLabels:
      app: api-service
      component: backend
Implementing workload-specific policies
Database cluster protection
Database clusters require stricter availability guarantees to maintain quorum and prevent data inconsistency.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: postgres-cluster-pdb
  namespace: database
spec:
  minAvailable: 67%
  selector:
    matchLabels:
      app: postgresql
      role: primary
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: redis-cluster-pdb
  namespace: database
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: redis
      role: master
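Be careful with percentage values on small replica counts: for minAvailable percentages, Kubernetes rounds the computed pod count up. With minAvailable: 67% on a 3-replica cluster, ceil(0.67 × 3) = 3, so zero evictions are allowed and every drain is blocked; a fixed minAvailable: 2 is often the safer way to express "keep quorum" at that size. The rounding can be checked with shell arithmetic:

```shell
#!/bin/bash
# Ceiling of pct% of replicas, matching the round-up Kubernetes applies
# to percentage-based minAvailable values
min_available_pods() {
  local pct=$1 replicas=$2
  echo $(( (pct * replicas + 99) / 100 ))
}

min_available_pods 67 3   # ceil(2.01) = 3 -> no evictions allowed on 3 replicas
min_available_pods 67 5   # ceil(3.35) = 4 -> 1 eviction allowed on 5 replicas
```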
Stateful application protection
StatefulSets need careful disruption handling to maintain pod identity and ordered scaling behavior.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: elasticsearch-pdb
  namespace: logging
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: elasticsearch
      component: data
Critical system services
DNS, ingress controllers, and monitoring services require high availability to maintain cluster functionality.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: coredns-pdb
  namespace: kube-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-ingress-pdb
  namespace: ingress-nginx
spec:
  minAvailable: 75%
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/component: controller
Advanced disruption budget configurations
Multi-tier application protection
Complex applications need coordinated disruption policies across different tiers to maintain end-to-end availability.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: frontend-pdb
  namespace: ecommerce
spec:
  minAvailable: 50%
  selector:
    matchLabels:
      app: ecommerce
      tier: frontend
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: backend-pdb
  namespace: ecommerce
spec:
  maxUnavailable: 33%
  selector:
    matchLabels:
      app: ecommerce
      tier: backend
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: cache-pdb
  namespace: ecommerce
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: ecommerce
      component: redis-cache
Apply all disruption budgets
Deploy the complete set of disruption budgets and verify they cover your critical workloads.
kubectl apply -f database-pdb.yaml
kubectl apply -f elasticsearch-pdb.yaml
kubectl apply -f system-services-pdb.yaml
kubectl apply -f multi-tier-pdb.yaml
Monitoring and troubleshooting disruption budgets
Monitor PDB status and events
Check the current status of all disruption budgets and identify any that are preventing necessary maintenance.
kubectl get pdb --all-namespaces -o wide
kubectl get events --field-selector reason=EvictionBlocked --all-namespaces
kubectl describe pdb --all-namespaces | grep -A 5 -B 5 "Status:"
Test disruption scenarios
Simulate pod evictions to verify your disruption budgets work correctly before actual maintenance. Be aware that kubectl delete pod bypasses PDBs entirely; only the eviction API, which kubectl drain and the cluster autoscaler use, honors them. One way to exercise the eviction API directly is kubectl create --raw:
# List the protected pods
kubectl get pods -n production -l app=web-app
# Evict one pod through the eviction API (substitute a pod name from the list above)
kubectl create --raw /api/v1/namespaces/production/pods/web-app-7d4b8c6f9d-abc123/eviction -f - <<EOF
{"apiVersion": "policy/v1", "kind": "Eviction", "metadata": {"name": "web-app-7d4b8c6f9d-abc123", "namespace": "production"}}
EOF
# If the budget is exhausted, the request is rejected; check recent events too
kubectl get events -n production --sort-by='.lastTimestamp' | tail -10
Validate pod selector matching
Ensure your disruption budgets select the correct pods by checking label matches and target counts.
# Check which pods match a PDB selector
kubectl get pods -n production -l app=web-app,tier=frontend
# Verify the PDB targets the expected pod count
kubectl describe pdb web-app-pdb -n production | grep -E "(Min available|Allowed disruptions|Current|Desired|Total)"
# There is no built-in command to list pods not covered by any PDB; instead,
# compare each PDB's selector against your pod labels
kubectl get pdb --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.selector.matchLabels}{"\n"}{end}'
kubectl get pods --all-namespaces --show-labels
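The comparison logic can be scripted. This is a simplified sketch that handles only matchLabels (not matchExpressions) and uses canned selectors and labels so it runs anywhere; on a live cluster the two arrays would be populated from the kubectl commands above:

```shell
#!/bin/bash
# Sketch: report pods whose labels satisfy no PDB selector.
# Selectors and pod labels are canned; on a real cluster they would come
# from kubectl get pdb / kubectl get pods --show-labels.
pdb_selectors=("app=web-app,tier=frontend" "app=api-service,component=backend")
pod_labels=("web-app-1|app=web-app,tier=frontend,pod-template-hash=abc"
            "batch-job-1|app=batch,job-name=nightly")

# True when the label set contains every key=value pair of the selector
matches_selector() {
  local labels=$1 selector=$2 pair
  local pairs
  IFS=',' read -ra pairs <<< "$selector"
  for pair in "${pairs[@]}"; do
    case ",$labels," in
      *",$pair,"*) ;;
      *) return 1 ;;
    esac
  done
  return 0
}

for entry in "${pod_labels[@]}"; do
  pod=${entry%%|*}
  labels=${entry#*|}
  covered=no
  for sel in "${pdb_selectors[@]}"; do
    if matches_selector "$labels" "$sel"; then covered=yes; break; fi
  done
  if [ "$covered" = no ]; then
    echo "UNCOVERED: $pod"   # prints: UNCOVERED: batch-job-1
  fi
done
```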
Policy enforcement and automation
Create admission controller policy
Use OPA Gatekeeper to enforce that critical deployments have an associated disruption budget. Note that the data.inventory lookup below requires Gatekeeper's sync (replication) config so that PodDisruptionBudget objects are cached in the inventory.
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: requirepodisruptionbudget
spec:
  crd:
    spec:
      names:
        kind: RequirePodDisruptionBudget
      validation:
        openAPIV3Schema:
          properties:
            exemptImages:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package requirepodisruptionbudget

        violation[{"msg": msg}] {
          input.review.object.kind == "Deployment"
          replicas := input.review.object.spec.replicas
          replicas > 1
          not pdb_exists
          msg := "Deployments with > 1 replica must have a PodDisruptionBudget"
        }

        # PDBs are namespaced, so look them up under the Deployment's namespace
        pdb_exists {
          ns := input.review.object.metadata.namespace
          pdb := data.inventory.namespace[ns]["policy/v1"]["PodDisruptionBudget"][_]
          pdb.spec.selector.matchLabels.app == input.review.object.spec.template.metadata.labels.app
        }
Create monitoring script
Set up automated monitoring to alert when disruption budgets block necessary maintenance operations.
#!/bin/bash
# Monitor PDB status and blocked evictions

echo "=== Pod Disruption Budget Status ==="
kubectl get pdb --all-namespaces -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,MIN-AVAILABLE:.spec.minAvailable,MAX-UNAVAILABLE:.spec.maxUnavailable,ALLOWED-DISRUPTIONS:.status.disruptionsAllowed'

echo -e "\n=== Recent Eviction Blocks ==="
kubectl get events --all-namespaces --field-selector reason=EvictionBlocked --sort-by='.lastTimestamp' | tail -10

echo -e "\n=== PDBs Preventing Maintenance ==="
for pdb in $(kubectl get pdb --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}'); do
  namespace=${pdb%%/*}
  name=${pdb##*/}
  allowed=$(kubectl get pdb "$name" -n "$namespace" -o jsonpath='{.status.disruptionsAllowed}')
  if [ "${allowed:-0}" -eq 0 ]; then
    echo "WARNING: $pdb allows zero disruptions and will block evictions"
  fi
done
Make the script executable and test
Set up the monitoring script and integrate it with your monitoring system for regular health checks.
chmod +x monitor-pdb.sh
./monitor-pdb.sh
# Schedule the check every 15 minutes, preserving any existing cron entries
(crontab -l 2>/dev/null; echo "*/15 * * * * /path/to/monitor-pdb.sh") | crontab -
Integration with cluster operations
Safe node drainage with PDBs
Test how pod disruption budgets interact with node maintenance operations to ensure smooth cluster operations.
# Check node readiness before draining
kubectl get nodes
kubectl describe node worker-node-1 | grep -A 10 "Conditions:"
# Drain the node; evictions go through the eviction API and respect PDBs
kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data --timeout=300s
# Monitor eviction progress (quote the pipeline so watch runs it as a whole)
watch 'kubectl get pods --all-namespaces -o wide | grep worker-node-1'
# Allow scheduling again once maintenance is complete
kubectl uncordon worker-node-1
Integration with cluster autoscaler
The cluster autoscaler honors pod disruption budgets by default: before removing a node during scale-down it drains the node through the eviction API, and a PDB with zero allowed disruptions prevents the node from being removed. Scale-down behavior is tuned through the autoscaler's command-line flags (set on its Deployment), not through a ConfigMap:
# Example cluster-autoscaler container args
- --scale-down-enabled=true
- --scale-down-delay-after-add=10m
- --scale-down-unneeded-time=10m
- --scale-down-utilization-threshold=0.5
- --max-graceful-termination-sec=600
Verify your setup
# Check all PDBs are created and healthy
kubectl get pdb --all-namespaces
# Verify PDB selector coverage
kubectl describe pdb --all-namespaces | grep -E "(Name|Namespace|Min available|Max unavailable|Allowed disruptions)"
# Test eviction protection through the eviction API (kubectl delete bypasses PDBs)
POD=$(kubectl get pods -n production -l app=web-app -o jsonpath='{.items[0].metadata.name}')
kubectl create --raw "/api/v1/namespaces/production/pods/$POD/eviction" -f - <<EOF
{"apiVersion": "policy/v1", "kind": "Eviction", "metadata": {"name": "$POD", "namespace": "production"}}
EOF
# Check monitoring script output
./monitor-pdb.sh
# Validate policy enforcement (if using Gatekeeper; constraints are cluster-scoped)
kubectl get constraints
Common issues
| Symptom | Cause | Fix |
|---|---|---|
| PDB blocks all evictions | minAvailable equals current pod count | Reduce minAvailable or increase replicas: kubectl scale deployment web-app --replicas=4 |
| PDB selects no pods | Label selector mismatch | Check pod labels: kubectl get pods --show-labels and fix PDB selector |
| Node drain hangs indefinitely | PDB prevents required evictions | Temporarily delete problematic PDB: kubectl delete pdb problematic-pdb |
| Multiple PDBs conflict | Overlapping pod selectors | Use more specific selectors or combine into single PDB |
| StatefulSet pods not protected | PDB doesn't account for pod identity | Use maxUnavailable: 1 for ordered StatefulSet updates |
| Cluster autoscaler seems to ignore PDBs | PDB selector doesn't match the pods, or status not yet computed | Verify selectors and check Allowed disruptions in kubectl describe pdb; the autoscaler honors PDBs by default |
Next steps
- Configure Kubernetes horizontal pod autoscaler for dynamic scaling
- Monitor Kubernetes clusters with Prometheus and Grafana
- Implement Kubernetes Pod Security Standards and admission controllers
- Configure Kubernetes network policies for microsegmentation
- Set up Kubernetes cluster backup and disaster recovery