Unstructured logs are technical debt. Structured logs are queryable, parseable, and actually useful when things break.

The Problem

2026-02-28 10:15:23 INFO User alice logged in from 192.168.1.1
2026-02-28 10:15:24 ERROR Order 12345 processing failed: connection timeout
2026-02-28 10:15:25 INFO Request completed in 234ms

Regex hell when you need to extract user, IP, order ID, or duration.
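For illustration, here is roughly what extracting those fields from the text format demands. The regex and the exact line layout are assumptions based on the example above, and each message shape needs its own pattern:

```python
import re

# A hypothetical parser for the text logs above: one brittle regex per message shape.
LOGIN_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) INFO "
    r"User (?P<user>\S+) logged in from (?P<ip>[\d.]+)$"
)

line = "2026-02-28 10:15:23 INFO User alice logged in from 192.168.1.1"
m = LOGIN_RE.match(line)
# Returns None the moment the format drifts -- no error, just silently lost data.
print(m.group("user"), m.group("ip"))  # alice 192.168.1.1
```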

The Solution

{"timestamp":"2026-02-28T10:15:23Z","level":"info","event":"user_login","user":"alice","ip":"192.168.1.1"}
{"timestamp":"2026-02-28T10:15:24Z","level":"error","event":"order_failed","order_id":12345,"error":"connection timeout"}
{"timestamp":"2026-02-28T10:15:25Z","level":"info","event":"request_completed","duration_ms":234}

Now you can query: level:error AND order_id:12345

Python with structlog

import structlog

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.add_log_level,
        structlog.processors.JSONRenderer()
    ]
)

log = structlog.get_logger()

# Usage
log.info("user_login", user="alice", ip="192.168.1.1")
log.error("order_failed", order_id=12345, error="connection timeout")

Output:

{"event":"user_login","user":"alice","ip":"192.168.1.1","level":"info","timestamp":"2026-02-28T10:15:23Z"}

Node.js with pino

const pino = require('pino');
const log = pino();

log.info({ user: 'alice', ip: '192.168.1.1' }, 'user_login');
log.error({ orderId: 12345, error: 'connection timeout' }, 'order_failed');

Pino is among the fastest JSON loggers for Node.

Go with slog (stdlib)

import (
    "log/slog"
    "os"
)

logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

logger.Info("user_login",
    slog.String("user", "alice"),
    slog.String("ip", "192.168.1.1"))

logger.Error("order_failed",
    slog.Int("order_id", 12345),
    slog.String("error", "connection timeout"))

Go 1.21+ includes slog in the standard library.

Context Propagation

Add request context to all logs automatically:

# structlog with context -- requires structlog.contextvars.merge_contextvars
# in the configured processors list, or bound values won't appear in output
from structlog.contextvars import bind_contextvars, clear_contextvars

@app.middleware("http")
async def logging_middleware(request, call_next):
    clear_contextvars()
    bind_contextvars(
        request_id=request.headers.get("x-request-id"),
        user_id=request.state.user_id,
        path=request.url.path
    )
    return await call_next(request)

# Now every log call during the request includes request_id, user_id, and path
log.info("processing_started")
# {"event":"processing_started","request_id":"abc123","user_id":"42","path":"/api/orders"}
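The same idea can be sketched with only the standard library: a contextvar holds the request-scoped value and a logging.Filter copies it onto every record. The names (request_id, ContextFilter, JsonFormatter) are illustrative, not from any library:

```python
import contextvars
import json
import logging

request_id = contextvars.ContextVar("request_id", default=None)

class ContextFilter(logging.Filter):
    """Copy the current context variables onto every record through this logger."""
    def filter(self, record):
        record.request_id = request_id.get()
        return True

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname.lower(),
            "event": record.getMessage(),
            "request_id": record.request_id,
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("ctx_demo")
log.addHandler(handler)
log.addFilter(ContextFilter())
log.setLevel(logging.INFO)

request_id.set("abc123")        # done once per request, e.g. in middleware
log.info("processing_started")  # request_id rides along automatically
```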

What to Log

Always include:

  • Timestamp (ISO 8601)
  • Level (debug, info, warn, error)
  • Event name (what happened)
  • Request/trace ID (for correlation)

Include when relevant:

  • User ID
  • Duration
  • Error details
  • Input parameters (sanitized)

Never include:

  • Passwords
  • API keys
  • Credit card numbers
  • PII without consent
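The "never include" list is easiest to enforce centrally. A structlog processor is just a callable over the event dict, so a scrubber can run before the renderer. A minimal sketch; the key list here is illustrative, not exhaustive:

```python
SENSITIVE_KEYS = {"password", "api_key", "credit_card", "ssn"}

def scrub_sensitive(logger, method_name, event_dict):
    """structlog-style processor: mask known-sensitive keys before rendering."""
    for key in list(event_dict):
        if key in SENSITIVE_KEYS:
            event_dict[key] = "[REDACTED]"
    return event_dict

# Standalone check of the processor logic:
scrubbed = scrub_sensitive(None, "info", {"event": "signup", "user": "alice", "password": "hunter2"})
print(scrubbed)  # {'event': 'signup', 'user': 'alice', 'password': '[REDACTED]'}
```

In structlog this would go in the processors list ahead of JSONRenderer, so nothing sensitive ever reaches the output.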

Log Levels That Mean Something

log.debug("cache_lookup", key="user:123")      # Development only
log.info("order_created", order_id=456)         # Normal operations
log.warning("rate_limit_approaching", usage=95) # Needs attention soon  
log.error("payment_failed", error="declined")   # Action required
log.critical("database_down")                   # Wake someone up

If you page on warnings, you’re doing it wrong.

Querying in Practice

With Loki/Grafana:

{app="myapp"} | json | level="error" | order_id=12345

With CloudWatch Logs Insights:

fields @timestamp, @message
| filter level = "error"
| filter order_id = 12345

With Elasticsearch:

{"query": {"bool": {"must": [
  {"term": {"level": "error"}},
  {"term": {"order_id": 12345}}
]}}}

Performance Considerations

# Bad: string formatting happens even if debug is disabled
log.debug(f"Processing {len(items)} items: {items}")

# Good: lazy evaluation
log.debug("processing_items", count=len(items), items=items)

Most structured logging libraries skip formatting and serialization entirely when the level is disabled, though argument values themselves are still evaluated at the call site.

Migration Strategy

  1. Add structured logger alongside existing
  2. Log both formats temporarily
  3. Update log aggregation to parse JSON
  4. Remove old text logging
  5. Update dashboards and alerts
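Step 2 can be sketched with the stdlib: one logger, two handlers, emitting text and JSON side by side. The JsonFormatter below is a deliberately minimal assumption, not a production formatter:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Minimal JSON formatter for the migration period."""
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname.lower(),
            "event": record.getMessage(),
        })

log = logging.getLogger("migration_demo")
log.setLevel(logging.INFO)

text_handler = logging.StreamHandler(sys.stdout)  # legacy human-readable stream
text_handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

json_handler = logging.StreamHandler(sys.stdout)  # new structured stream; point at a file or shipper in practice
json_handler.setFormatter(JsonFormatter())

log.addHandler(text_handler)
log.addHandler(json_handler)

log.info("order_created")  # emitted once in each format
```

Once the aggregation side parses JSON reliably, dropping the text handler completes steps 4 and 5.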

The Standard Fields

Pick a convention and stick to it:

{
  "timestamp": "2026-02-28T10:15:23.456Z",
  "level": "info",
  "logger": "myapp.orders",
  "event": "order_created",
  "trace_id": "abc123",
  "span_id": "def456",
  
  "order_id": 789,
  "user_id": "alice",
  "amount_cents": 4999,
  "duration_ms": 234
}

Consistent naming makes querying trivial.
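One way to keep the convention honest is a tiny helper that every call site goes through, so the envelope fields can't drift. The function is a sketch following the field list above, not a real library API:

```python
import datetime
import json

def make_record(level, event, logger="myapp", **fields):
    """Build the standard envelope, then merge event-specific fields."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "level": level,
        "logger": logger,
        "event": event,
    }
    record.update(fields)
    return json.dumps(record)

print(make_record("info", "order_created", order_id=789, amount_cents=4999))
```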

Structured logging is a small investment that pays off every time you debug production. The first incident you solve in 5 minutes instead of 50 will justify the effort.