SIGTERM is not SIGKILL. Your application has seconds to clean up — use them wisely.
February 23, 2026 · 6 min · 1112 words · Rob Washington
When Kubernetes scales down your deployment or you push a new release, your running containers receive SIGTERM. Then, after a grace period, SIGKILL. The difference between graceful and chaotic shutdown is what happens in those seconds between the two signals.
A request half-processed, a database transaction uncommitted, a file partially written — these are the artifacts of ungraceful shutdown. They create inconsistent state, failed requests, and debugging nightmares.
For an HTTP server in Node.js, stop accepting new connections, let in-flight requests finish, then close other resources:

```js
const server = app.listen(3000);

process.on('SIGTERM', () => {
  console.log('SIGTERM received, closing HTTP server...');
  server.close(() => {
    console.log('HTTP server closed');
    // Close database connections, etc.
    db.end(() => {
      process.exit(0);
    });
  });
});
```
For queue workers, the pattern is similar but focused on job completion:
```python
import signal

class Worker:
    def __init__(self):
        self.should_stop = False
        signal.signal(signal.SIGTERM, self.handle_shutdown)

    def handle_shutdown(self, signum, frame):
        print("Shutdown requested, finishing current job...")
        self.should_stop = True

    def run(self):
        while not self.should_stop:
            job = self.queue.get(timeout=1)
            if job:
                self.process(job)  # Runs to completion
                self.queue.ack(job)
        print("Clean shutdown complete")
```
The key: check should_stop between jobs, not during. Never abandon a job mid-processing.
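To see the stop-flag pattern in action, here is a minimal, self-contained sketch (an in-memory queue, with the worker signaling itself mid-job to simulate a deploy; `DemoWorker` and its job names are illustrative, not part of any real queue library):

```python
import os
import queue
import signal

class DemoWorker:
    """Minimal worker: finishes the in-flight job, then stops."""

    def __init__(self, jobs):
        self.should_stop = False
        self.jobs = queue.Queue()
        for job in jobs:
            self.jobs.put(job)
        self.processed = []
        signal.signal(signal.SIGTERM, self.handle_shutdown)

    def handle_shutdown(self, signum, frame):
        self.should_stop = True  # set the flag only; never exit mid-job

    def run(self):
        while not self.should_stop:
            try:
                job = self.jobs.get(timeout=0.1)
            except queue.Empty:
                break
            if job == "b":
                # Simulate SIGTERM arriving while this job is running
                os.kill(os.getpid(), signal.SIGTERM)
            self.processed.append(job)  # the current job still completes

worker = DemoWorker(["a", "b", "c"])
worker.run()
print(worker.processed)  # → ['a', 'b']
```

Even though the signal lands while "b" is being processed, "b" runs to completion; only "c" is left behind for the next worker to pick up.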
Connection pools should be drained, not abandoned:
```python
import atexit
import sys

def cleanup_db():
    print("Closing database connections...")
    connection_pool.closeall()
    print("Database connections closed")

atexit.register(cleanup_db)

# Or explicitly in a signal handler
def handle_sigterm(signum, frame):
    cleanup_db()
    sys.exit(0)
```
Abandoned connections linger on the database server until timeout. Clean closure frees resources immediately.
Kubernetes gives you two knobs for shutdown timing, the grace period and the preStop hook:

```yaml
apiVersion: v1
kind: Pod
spec:
  terminationGracePeriodSeconds: 60  # Time before SIGKILL
  containers:
    - name: app
      lifecycle:
        preStop:
          exec:
            command: ["/bin/sh", "-c", "sleep 5"]  # Wait for LB to drain
```
The preStop hook runs before SIGTERM. Use it to:
- Wait for the load balancer to stop sending traffic
- Deregister from service discovery
- Send notifications
Why the sleep? Load balancers don’t instantly stop routing traffic when a pod starts terminating. A brief sleep ensures in-flight requests have somewhere to go.
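A related pattern is to fail your readiness probe as soon as SIGTERM arrives, so the load balancer stops routing to you while in-flight requests finish. A framework-agnostic sketch (the `readiness()` function stands in for whatever your health-check route returns; the names are illustrative):

```python
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    global shutting_down
    shutting_down = True  # readiness now fails; keep serving while the LB drains

signal.signal(signal.SIGTERM, handle_sigterm)

def readiness():
    # Wire this to your health-check endpoint; 503 tells the LB to stop routing here
    return (503, "draining") if shutting_down else (200, "ok")
```

Once `shutting_down` flips, new probes see 503 and the pod is pulled from rotation, while existing connections complete normally.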
No signal handler at all:

```python
# Bad: SIGTERM kills the process immediately with no cleanup
if __name__ == "__main__":
    app.run()
```
Infinite cleanup:
```python
# Bad: cleanup can hang forever
def cleanup():
    while pending_items:  # What if this never empties?
        process(pending_items.pop())
```
Always have timeouts in cleanup code.
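One way to bound cleanup is a wall-clock deadline checked on every iteration. A sketch using `time.monotonic()` (the function and parameter names are illustrative):

```python
import time

def drain_with_deadline(pending_items, process, deadline_seconds=10):
    """Process what we can, but never block shutdown indefinitely."""
    deadline = time.monotonic() + deadline_seconds
    while pending_items and time.monotonic() < deadline:
        process(pending_items.pop())
    if pending_items:
        print(f"Deadline hit, abandoning {len(pending_items)} items")
```

Abandoned items are not ideal, but a predictable, bounded shutdown beats a hung process that gets SIGKILLed anyway.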
Assuming instant propagation:
```python
# Bad: assuming the load balancer instantly knows we're gone
def handle_sigterm(signum, frame):
    sys.exit(0)  # Requests in flight get connection resets
```
Give the infrastructure time to catch up.
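The fix is the same idea as the preStop sleep, done in-process: pause briefly before tearing anything down. A sketch, with `DRAIN_SECONDS` as an illustrative constant you would tune to your load balancer's update interval:

```python
import signal
import sys
import time

DRAIN_SECONDS = 5  # illustrative; tune to how fast your LB notices terminations

def handle_sigterm(signum, frame):
    # Keep serving while the load balancer learns we're terminating
    time.sleep(DRAIN_SECONDS)
    # ...then close the server, drain pools, and exit
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)
```

Just make sure the sleep plus your cleanup fits comfortably inside `terminationGracePeriodSeconds`, or SIGKILL arrives before you finish.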
Graceful shutdown is the difference between “the deploy went fine” and “we had a spike of 500s during the deploy.” It’s a few dozen lines of signal handling that prevent hours of debugging and customer complaints.
Handle SIGTERM. Stop accepting. Finish processing. Clean up. Exit. Your future self will thank you.