February 28, 2026 · 6 min · 1071 words · Rob Washington
Redis is often introduced as “a cache,” but that undersells it. Here are patterns that leverage Redis for rate limiting, sessions, queues, and real-time features.
```python
import redis
import time

r = redis.Redis()

def is_rate_limited(user_id: str, limit: int = 100, window: int = 60) -> bool:
    """Allow `limit` requests per `window` seconds."""
    key = f"ratelimit:{user_id}"
    now = time.time()
    pipe = r.pipeline()
    pipe.zremrangebyscore(key, 0, now - window)  # Remove old entries
    pipe.zadd(key, {str(now): now})              # Add current request
    pipe.zcard(key)                              # Count requests in window
    pipe.expire(key, window)                     # Auto-cleanup
    results = pipe.execute()
    request_count = results[2]
    return request_count > limit
```
Using a sorted set with timestamps gives you a true sliding window, not just fixed buckets.
```python
import uuid

def acquire_lock(lock_name: str, timeout: int = 10) -> str | None:
    """Returns lock_id if acquired, None if already locked."""
    lock_id = str(uuid.uuid4())
    acquired = r.set(
        f"lock:{lock_name}",
        lock_id,
        nx=True,     # Only set if doesn't exist
        ex=timeout,  # Auto-expire to prevent deadlocks
    )
    return lock_id if acquired else None

def release_lock(lock_name: str, lock_id: str) -> bool:
    """Release lock only if we own it."""
    script = """
    if redis.call("get", KEYS[1]) == ARGV[1] then
        return redis.call("del", KEYS[1])
    else
        return 0
    end
    """
    return r.eval(script, 1, f"lock:{lock_name}", lock_id) == 1
```
The Lua script ensures atomic check-and-delete. Without it, you risk releasing someone else’s lock.
```python
import json

def save_session(session_id: str, data: dict, ttl: int = 3600):
    r.setex(f"session:{session_id}", ttl, json.dumps(data))

def get_session(session_id: str) -> dict | None:
    data = r.get(f"session:{session_id}")
    if data:
        r.expire(f"session:{session_id}", 3600)  # Extend on access
        return json.loads(data)
    return None

def destroy_session(session_id: str):
    r.delete(f"session:{session_id}")
```
Session data survives server restarts. Multiple app servers share state. TTL handles cleanup.
```python
import json
import random

def get_user(user_id: str) -> dict:
    cache_key = f"user:{user_id}"
    # Try cache first
    cached = r.get(cache_key)
    if cached:
        return json.loads(cached)
    # Cache miss: fetch from DB (`db` stands in for your database client)
    user = db.query("SELECT * FROM users WHERE id = %s", user_id)
    # Populate cache with jittered TTL (prevents stampede)
    ttl = 3600 + random.randint(0, 300)
    r.setex(cache_key, ttl, json.dumps(user))
    return user

def invalidate_user(user_id: str):
    r.delete(f"user:{user_id}")
```
The random TTL jitter prevents cache stampedes when many keys expire simultaneously.
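Seen in isolation, the jitter is just a randomized expiry (a sketch; `jittered_ttl` is an illustrative helper using the same 3600 + 0..300 range as the snippet above):

```python
import random

def jittered_ttl(base: int = 3600, spread: int = 300) -> int:
    # Each key expires somewhere in [base, base + spread] seconds, so a burst
    # of cache writes doesn't turn into a burst of simultaneous expirations
    return base + random.randint(0, spread)
```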
```python
# Do this once at startup
pool = redis.ConnectionPool(
    host='localhost',
    port=6379,
    max_connections=50,
    decode_responses=True,
)
r = redis.Redis(connection_pool=pool)

# Use `r` everywhere
```
```shell
# Check memory usage
redis-cli INFO memory

# Find big keys
redis-cli --bigkeys

# Set memory limit and eviction policy
redis-cli CONFIG SET maxmemory 2gb
redis-cli CONFIG SET maxmemory-policy allkeys-lru
```
allkeys-lru evicts least-recently-used keys when memory is full. Good for caches.
Redis is a database that happens to be fast, not just a cache that happens to persist. Use it accordingly.