Your pod is stuck in CrashLoopBackOff. Kubernetes keeps restarting it, each time waiting longer before trying again. Here’s how to diagnose and fix it.
What CrashLoopBackOff Actually Means
CrashLoopBackOff isn’t the error itself — it’s Kubernetes telling you “this container keeps crashing, so I’m backing off on restarts.”
The actual problem is that your container exits with a non-zero exit code. Kubernetes notices, restarts it, it crashes again, and the backoff timer increases: 10s, 20s, 40s, up to 5 minutes.
Step 1: Check Pod Status
Start with the basics: list the pods in the affected namespace and look at their status.
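For example (pod and namespace names are up to you):

```shell
kubectl get pods
# Look for pods whose STATUS is CrashLoopBackOff and whose
# RESTARTS count keeps climbing.
```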
The RESTARTS column tells you how many times it’s crashed. High numbers mean this has been going on for a while.
Step 2: Get Pod Details
Run kubectl describe against the failing pod, then scroll to the Events section at the bottom. Also check the Last State section for the container's exit code.
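A sketch of the command (replace the pod name with your own):

```shell
kubectl describe pod <pod-name>
# Events (bottom of output): scheduling, pulls, restarts, probe failures.
# Last State -> Terminated: reason and exit code of the previous crash.
```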
Exit codes matter:
- Exit Code 1: Generic error (application crashed)
- Exit Code 137: OOMKilled (out of memory)
- Exit Code 139: Segmentation fault
- Exit Code 143: SIGTERM (graceful shutdown, usually not an error)
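If you want the exit code without scanning the full describe output, a jsonpath query pulls it directly (pod name is a placeholder, and this assumes a single-container pod):

```shell
kubectl get pod <pod-name> \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
```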
Step 3: Check the Logs
This is where you'll usually find the answer. Note that if the container has already crashed and restarted, the current logs may be empty or misleading; use the --previous flag to read the logs of the last failed instance.
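Both variants, with a placeholder pod name:

```shell
kubectl logs <pod-name>
# Logs from the instance that just crashed:
kubectl logs <pod-name> --previous
# For multi-container pods, add: -c <container-name>
```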
Common Causes and Fixes
1. Application Error on Startup
Symptom: Logs show your application throwing an exception
Fix: Check your Dockerfile’s CMD or ENTRYPOINT. The path might be wrong, or files weren’t copied correctly.
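As an illustrative sketch (image, paths, and filenames here are placeholders, not your actual build): a CMD that points at a file the build never copied will exit immediately and trigger CrashLoopBackOff.

```dockerfile
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# If the real entry file lives at dist/server.js, this path is wrong
# and the container exits the moment it starts.
CMD ["node", "server.js"]
```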
2. Missing Environment Variables
Symptom: Logs show config errors or undefined variables
Fix: Add the missing environment variable to your deployment:
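A minimal sketch of the container spec; the variable, image, and Secret names are assumptions for illustration:

```yaml
spec:
  containers:
    - name: app
      image: my-app:1.2.3
      env:
        - name: DATABASE_URL        # the variable your logs say is missing
          valueFrom:
            secretKeyRef:
              name: app-secrets     # assumed Secret holding the value
              key: database-url
```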
3. Database/Service Not Ready
Symptom: Connection refused errors on startup
Fix: Add an init container to wait for dependencies:
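One common pattern is a busybox init container that blocks until the dependency accepts TCP connections; the service name and port below are placeholders:

```yaml
spec:
  initContainers:
    - name: wait-for-db
      image: busybox:1.36
      # Pod's main containers won't start until this loop exits.
      command:
        - sh
        - -c
        - 'until nc -z postgres-service 5432; do echo waiting for db; sleep 2; done'
  containers:
    - name: app
      image: my-app:1.2.3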
4. Out of Memory (OOMKilled)
Symptom: Exit code 137, or describe pod shows OOMKilled
Fix: Increase memory limits:
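In the container spec, for example (the numbers are starting points, not recommendations; size them to your workload):

```yaml
resources:
  requests:
    memory: "256Mi"
  limits:
    memory: "512Mi"   # raise this if the container keeps getting OOMKilled
```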
5. Liveness Probe Failing
Symptom: Pod keeps restarting even though logs look fine
Kubernetes might be killing your pod because the liveness probe fails. Check your probe configuration:
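A typical HTTP probe looks like this; the path and port are assumptions about your app:

```yaml
livenessProbe:
  httpGet:
    path: /healthz          # assumed health endpoint
    port: 8080
  initialDelaySeconds: 30   # give the app time to boot before the first probe
  periodSeconds: 10
  failureThreshold: 3       # restarts only after 3 consecutive failures
```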
If your app takes time to initialize, increase initialDelaySeconds.
6. Wrong Command or Entrypoint
Symptom: Container exits immediately with no logs
Your container might be running the wrong command and exiting instantly.
Debug by running the container interactively:
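One way to do this is a throwaway pod from the same image, overriding the entrypoint with a shell (image name is a placeholder):

```shell
kubectl run debug --rm -it --image=my-app:1.2.3 --command -- /bin/sh
```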
Then manually run your start command to see what happens.
Quick Debugging Checklist
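The steps above, condensed into commands (replace &lt;pod&gt; with your pod name):

```shell
kubectl get pods                          # status and restart count
kubectl describe pod <pod>                # events, last state, exit code
kubectl logs <pod> --previous             # logs from the crashed instance
kubectl get events --sort-by=.metadata.creationTimestamp   # recent cluster events
```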
The Fix That Works 90% of the Time
Most CrashLoopBackOff issues are one of these:
- Check the logs (kubectl logs --previous): the error message tells you what's wrong
- Verify environment variables: missing config is extremely common
- Increase memory limits if the exit code is 137
- Fix liveness probes: add a proper initialDelaySeconds
The container logs almost always have the answer. Start there.