Unstructured logs are technical debt. Structured logs are queryable, parseable, and actually useful when things break.
## The Problem

```
2026-02-28 10:15:23 INFO User alice logged in from 192.168.1.1
2026-02-28 10:15:24 ERROR Failed to process order 12345: connection timeout
2026-02-28 10:15:25 INFO Request completed in 234ms
```

Unstructured: good luck parsing this.
Regex hell when you need to extract user, IP, order ID, or duration.
## The Solution

```json
{"timestamp": "2026-02-28T10:15:23Z", "level": "info", "event": "user_login", "user": "alice", "ip": "192.168.1.1"}
{"timestamp": "2026-02-28T10:15:24Z", "level": "error", "event": "order_failed", "order_id": 12345, "error": "connection timeout"}
{"timestamp": "2026-02-28T10:15:25Z", "level": "info", "event": "request_completed", "duration_ms": 234}
```
Now you can query: `level:error AND order_id:12345`
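That query syntax belongs to a log backend, but the same filtering works anywhere you have the raw lines. A minimal sketch in plain Python (the sample records mirror the ones above):

```python
import json


def filter_logs(lines, **criteria):
    """Yield parsed JSON log records whose fields match all given values."""
    for line in lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip any non-JSON lines mixed into the stream
        if all(record.get(key) == value for key, value in criteria.items()):
            yield record


logs = [
    '{"level": "info", "event": "user_login", "user": "alice"}',
    '{"level": "error", "event": "order_failed", "order_id": 12345}',
]
matches = list(filter_logs(logs, level="error", order_id=12345))
```

The equivalent against unstructured text would need a regex per message shape; here one generic function covers every event type.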
## Python with structlog

```python
import structlog

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.add_log_level,
        structlog.processors.JSONRenderer(),
    ]
)

log = structlog.get_logger()

# Usage
log.info("user_login", user="alice", ip="192.168.1.1")
log.error("order_failed", order_id=12345, error="connection timeout")
```
Output:

```json
{"event": "user_login", "user": "alice", "ip": "192.168.1.1", "level": "info", "timestamp": "2026-02-28T10:15:23Z"}
```
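structlog is a third-party dependency. When adding one is not an option, similar one-JSON-object-per-line output can be approximated with a custom stdlib `logging.Formatter`. A minimal sketch (the `fields` extra key is my own convention here, not a logging standard):

```python
import json
import logging
from datetime import datetime, timezone


class JsonFormatter(logging.Formatter):
    """Render each log record as a single-line JSON object."""

    def format(self, record):
        payload = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname.lower(),
            "event": record.getMessage(),
        }
        # Values passed via `extra={"fields": {...}}` land as a record attribute.
        payload.update(getattr(record, "fields", {}))
        return json.dumps(payload)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("myapp")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("user_login", extra={"fields": {"user": "alice", "ip": "192.168.1.1"}})
```

This covers basic output only; structlog's processor pipeline, bound loggers, and context support are the reason to take the dependency.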
## Node.js with pino

```javascript
const pino = require('pino');
const log = pino();

log.info({ user: 'alice', ip: '192.168.1.1' }, 'user_login');
log.error({ orderId: 12345, error: 'connection timeout' }, 'order_failed');
```
Pino is one of the fastest JSON loggers available for Node.
## Go with slog (stdlib)

```go
import (
	"log/slog"
	"os"
)

logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

logger.Info("user_login",
	slog.String("user", "alice"),
	slog.String("ip", "192.168.1.1"))

logger.Error("order_failed",
	slog.Int("order_id", 12345),
	slog.String("error", "connection timeout"))
```
Go 1.21+ includes slog in the standard library.
## Context Propagation

Add request context to all logs automatically:

```python
# structlog with context (requires structlog.contextvars.merge_contextvars
# in the processor chain configured earlier)
from structlog.contextvars import bind_contextvars, clear_contextvars

@app.middleware("http")
async def logging_middleware(request, call_next):
    clear_contextvars()
    bind_contextvars(
        request_id=request.headers.get("x-request-id"),
        user_id=request.state.user_id,
        path=request.url.path,
    )
    return await call_next(request)

# Now all logs include request_id, user_id, path
log.info("processing_started")
# {"event":"processing_started","request_id":"abc123","user_id":"42","path":"/api/orders"}
```
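structlog's contextvars helpers build on the stdlib `contextvars` module. The same propagation idea can be sketched dependency-free with a `logging.Filter` that stamps the current context onto every record (names here are illustrative):

```python
import contextvars
import logging

# One ContextVar per field the middleware would bind.
request_id_var = contextvars.ContextVar("request_id", default=None)


class ContextFilter(logging.Filter):
    """Copy the current request_id onto every record passing through."""

    def filter(self, record):
        record.request_id = request_id_var.get()
        return True


logger = logging.getLogger("ctx-demo")
logger.addFilter(ContextFilter())

# Middleware would do this once per request; every later log call,
# anywhere down the call stack, then carries the same request_id.
request_id_var.set("abc123")
```

Because `contextvars` is async-aware, concurrent requests each see their own value without passing it through every function signature.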
## What to Log

Always include:

- Timestamp (ISO 8601)
- Level (debug, info, warn, error)
- Event name (what happened)
- Request/trace ID (for correlation)

Include when relevant:

- User ID
- Duration
- Error details
- Input parameters (sanitized)

Never include:

- Passwords
- API keys
- Credit card numbers
- PII without consent

## Log Levels That Mean Something

```python
log.debug("cache_lookup", key="user:123")        # Development only
log.info("order_created", order_id=456)          # Normal operations
log.warning("rate_limit_approaching", usage=95)  # Needs attention soon
log.error("payment_failed", error="declined")    # Action required
log.critical("database_down")                    # Wake someone up
```
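The "never include" rules are easier to keep when enforced mechanically. A sketch of a structlog-style processor that masks sensitive keys before rendering (the key set is illustrative; tune it to your domain):

```python
# Hypothetical redaction processor; the key names are assumptions.
SENSITIVE_KEYS = {"password", "api_key", "credit_card", "ssn"}


def redact_sensitive(logger, method_name, event_dict):
    """Mask any sensitive top-level keys in the event dict."""
    for key in event_dict:
        if key in SENSITIVE_KEYS:
            event_dict[key] = "[REDACTED]"
    return event_dict
```

Added to the `processors` list before `JSONRenderer`, it runs on every event, so a stray `password=` keyword argument never reaches the log stream.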
If you page on warnings, you’re doing it wrong.
## Querying in Practice

With Loki/Grafana:

```
{app="myapp"} | json | level="error" | order_id=12345
```
With CloudWatch Logs Insights:

```
fields @timestamp, @message
| filter level = "error"
| filter order_id = 12345
```
With Elasticsearch:

```json
{"query": {"bool": {"must": [
  {"term": {"level": "error"}},
  {"term": {"order_id": 12345}}
]}}}
```
## Lazy Evaluation

```python
# Bad: string formatting happens even if debug is disabled
log.debug(f"Processing {len(items)} items: {items}")

# Good: lazy evaluation
log.debug("processing_items", count=len(items), items=items)
```
Structured logging libraries skip serialization if the level is disabled.
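The difference is observable with the stdlib logger too: %-style arguments are only formatted when the level is enabled, while an f-string always pays the formatting cost up front. A small demonstration (note that the arguments themselves are still evaluated either way; only formatting is deferred):

```python
import logging


class CountingRepr:
    """Counts how many times it is actually formatted."""

    def __init__(self):
        self.calls = 0

    def __repr__(self):
        self.calls += 1
        return "<items>"


logging.basicConfig(level=logging.INFO)  # debug is disabled
log = logging.getLogger("lazy-demo")

obj = CountingRepr()
log.debug("processing items: %s", obj)   # lazy: repr is never invoked
assert obj.calls == 0
log.debug(f"processing items: {obj!r}")  # eager: repr runs before the call
assert obj.calls == 1
```

For hot paths where even evaluating the arguments is expensive, guard with `log.isEnabledFor(logging.DEBUG)` first.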
## Migration Strategy

1. Add a structured logger alongside the existing one
2. Log both formats temporarily
3. Update log aggregation to parse JSON
4. Remove the old text logging
5. Update dashboards and alerts

## The Standard Fields

Pick a convention and stick to it:
```json
{
  "timestamp": "2026-02-28T10:15:23.456Z",
  "level": "info",
  "logger": "myapp.orders",
  "event": "order_created",
  "trace_id": "abc123",
  "span_id": "def456",
  "order_id": 789,
  "user_id": "alice",
  "amount_cents": 4999,
  "duration_ms": 234
}
```
Consistent naming makes querying trivial.
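A convention only stays consistent if something checks it. One option is a small validator run in tests against sample records; the required field set and snake_case rule here are assumptions matching the example above:

```python
# Hypothetical convention checker; REQUIRED_FIELDS mirrors the example schema.
REQUIRED_FIELDS = {"timestamp", "level", "event"}


def check_record(record: dict) -> list:
    """Return a list of convention violations for a parsed log record."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    problems += [
        f"field not snake_case: {key}"
        for key in record
        if not key.replace("_", "").islower()
    ]
    return problems
```

Running this over a sample of production logs in CI catches drift (a stray `orderId` next to `order_id`) before it fragments your queries.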
Structured logging is a small investment that pays off every time you debug production. The first incident you solve in 5 minutes instead of 50 will justify the effort.