Unstructured logs are a trap. They look simple until you need to find something.

[2026-02-27 05:30:15] INFO User john@example.com logged in from 192.168.1.50
[2026-02-27 05:30:16] ERROR Failed to process order 12345: connection timeout
[2026-02-27 05:30:17] WARN High memory usage detected: 87%

Quick: find all login failures from a specific IP range in the last hour. Now try parsing the order ID from error messages. Hope you enjoy regex.

Structured Logs Change Everything

Same events, structured:

{"timestamp":"2026-02-27T05:30:15Z","level":"info","event":"user_login","user":"john@example.com","ip":"192.168.1.50","success":true}
{"timestamp":"2026-02-27T05:30:16Z","level":"error","event":"order_processing","order_id":12345,"error":"connection_timeout","retry_count":3}
{"timestamp":"2026-02-27T05:30:17Z","level":"warn","event":"resource_alert","resource":"memory","usage_pct":87,"threshold_pct":80}

Now queries become trivial:

# Login failures from subnet
jq 'select(.event=="user_login" and .success==false and (.ip | startswith("192.168.1.")))' logs.json

# Orders with timeout errors
jq 'select(.event=="order_processing" and .error=="connection_timeout") | .order_id' logs.json

Fields are fields. No parsing, no regex, no guessing.

Choosing a Format

JSON is the default choice. Universal parser support, works with every log aggregator (ELK, Loki, Datadog), human-readable enough for debugging.

Logfmt is more compact: level=info event=user_login user=john@example.com. Great for high-volume systems where bytes matter.

JSON Lines (JSONL) — one JSON object per line — is the sweet spot for most applications. Streamable, greppable, parseable.
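The trade-off is easy to see side by side. Here is a quick sketch that renders the same event both ways; the `to_logfmt` helper is hypothetical, written just for this comparison, not part of any library:

```python
import json

def to_logfmt(fields: dict) -> str:
    """Render a flat dict as logfmt; quote values containing spaces."""
    parts = []
    for key, value in fields.items():
        text = str(value)
        if " " in text:
            text = f'"{text}"'
        parts.append(f"{key}={text}")
    return " ".join(parts)

event = {"level": "info", "event": "user_login",
         "user": "john@example.com", "ip": "192.168.1.50"}

as_json = json.dumps(event, separators=(",", ":"))
as_logfmt = to_logfmt(event)
print(as_json)
print(as_logfmt)  # level=info event=user_login user=john@example.com ip=192.168.1.50
```

Same fields, fewer bytes in logfmt: no braces, no quoted keys. Multiply that saving by millions of lines per day and the choice starts to matter.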

Implementation Patterns

Python (structlog)

import structlog

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.add_log_level,
        structlog.processors.JSONRenderer()
    ]
)

logger = structlog.get_logger()

logger.info("user_login", user=email, ip=request.remote_addr, success=True)
logger.error("order_processing", order_id=order.id, error="connection_timeout", retry_count=3)

Node.js (pino)

const pino = require('pino');
const logger = pino({ level: 'info' });

logger.info({ event: 'user_login', user: email, ip: req.ip, success: true });
logger.error({ event: 'order_processing', orderId: order.id, error: 'connection_timeout', retryCount: 3 });

Pino is blazingly fast — it writes JSON directly without intermediate object creation.

Go (zerolog)

import "github.com/rs/zerolog/log"

log.Info().
    Str("event", "user_login").
    Str("user", email).
    Str("ip", remoteAddr).
    Bool("success", true).
    Send()

log.Error().
    Str("event", "order_processing").
    Int("order_id", orderID).
    Str("error", "connection_timeout").
    Int("retry_count", 3).
    Send()

Bash

Even shell scripts can emit structured logs:

log_json() {
    local level="$1" event="$2"
    shift 2
    local fields=""
    while [[ $# -gt 0 ]]; do
        # NOTE: values are emitted as bare strings and are not JSON-escaped;
        # avoid embedded quotes or newlines in arguments
        fields+="\"$1\":\"$2\","
        shift 2
    done
    echo "{\"timestamp\":\"$(date -Iseconds)\",\"level\":\"$level\",\"event\":\"$event\",${fields%,}}"
}

log_json info script_start script_name "$0" pid "$$"
log_json error file_missing path "/etc/config.yaml" action "skipping"

Essential Fields

Every log entry should include:

| Field | Purpose |
|-----------|---------|
| timestamp | ISO 8601 format, always UTC |
| level | debug, info, warn, error, fatal |
| event | What happened (snake_case verb) |
| service | Which service emitted this |
| trace_id | Request correlation (if applicable) |

Beyond that, add context relevant to the event. For HTTP requests: method, path, status, latency. For database queries: query type, table, duration. For errors: error type, message, stack trace.
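Putting those together, an HTTP request entry might look like this (field values are illustrative):

```json
{
  "timestamp": "2026-02-27T05:30:15Z",
  "level": "info",
  "event": "http_request",
  "service": "checkout-api",
  "trace_id": "a1b2c3d4",
  "method": "POST",
  "path": "/api/orders",
  "status": 201,
  "latency_ms": 42
}
```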

Context Propagation

The real power comes from automatic context. Add fields once, include them everywhere:

# Python with structlog
logger = logger.bind(request_id=request.id, user_id=user.id)
logger.info("started_checkout")  # Includes request_id and user_id automatically
logger.info("payment_processed", amount=99.99)  # Still includes them
// Node.js with pino
const reqLogger = logger.child({ requestId: req.id, userId: user.id });
reqLogger.info({ event: 'started_checkout' });
reqLogger.info({ event: 'payment_processed', amount: 99.99 });

Now every log in that request context carries correlation IDs. Tracing a single user’s journey becomes a simple filter.
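The bind pattern itself is simple enough to sketch in plain Python. This toy class is not structlog's implementation, just an illustration of the idea:

```python
import json

class BoundLogger:
    """Toy bound logger: fields added with bind() ride along on every call."""

    def __init__(self, **fields):
        self._fields = fields

    def bind(self, **fields):
        # Return a NEW logger so request-scoped context never leaks
        # back into the shared parent logger.
        return BoundLogger(**{**self._fields, **fields})

    def info(self, event, **fields):
        record = {"level": "info", "event": event, **self._fields, **fields}
        print(json.dumps(record))
        return record

logger = BoundLogger(service="checkout-api")
req_logger = logger.bind(request_id="r-42", user_id="u-7")
req_logger.info("started_checkout")
req_logger.info("payment_processed", amount=99.99)
```

The immutability is the important design choice: `bind` returns a new logger instead of mutating the parent, so two concurrent requests can never see each other's context.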

Querying in Production

With structured logs in a log aggregator:

# Elasticsearch (Lucene)
event:order_processing AND error:connection_timeout AND @timestamp:[now-1h TO now]

# Grafana Loki (LogQL)
{service="api"} | json | event="order_processing" | error="connection_timeout"

# Datadog
@event:order_processing @error:connection_timeout

The query language varies, but the principle is the same: filter on fields, not regexes.

Performance Considerations

Structured logging adds overhead. A few ways to mitigate:

  1. Log asynchronously — Buffer and batch writes
  2. Sample high-frequency events — Log 1% of health checks
  3. Use efficient serializers — Pino, zerolog beat generic JSON libraries
  4. Avoid logging in hot paths — Aggregate metrics instead

For most applications, the debugging time saved vastly outweighs the microseconds spent serializing JSON.
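Sampling, for instance, can be as simple as a probability gate at the call site. A sketch (production systems often sample deterministically per trace ID instead, so a whole request is kept or dropped together):

```python
import random

SAMPLE_RATE = 0.01  # keep roughly 1% of health-check logs

def should_log(event: str) -> bool:
    """Gate noisy, high-frequency events; everything else always logs."""
    if event == "health_check":
        return random.random() < SAMPLE_RATE
    return True

# Guard the call site itself so the serialization cost is skipped too:
# if should_log("health_check"):
#     logger.info("health_check", status="ok")
```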

Migration Strategy

If you’re stuck with unstructured logs:

  1. Start with new code — All new services emit structured logs
  2. Wrap existing loggers — Add a structured wrapper that calls the old logger
  3. Add correlation IDs first — Even unstructured logs benefit from trace IDs
  4. Convert high-value events — Errors, auth events, transactions
  5. Ship both formats temporarily — Old format for existing dashboards, new for modern tools

Don’t try to convert everything at once. Incremental improvement beats stalled perfection.
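Step 2 can be sketched as a thin wrapper that emits both formats at once, using only the stdlib logging module (names here are illustrative):

```python
import json
import logging

legacy = logging.getLogger("app")  # the existing, unstructured logger

def log_event(level: str, event: str, **fields):
    """Emit a structured JSON line while still feeding the legacy logger."""
    record = {"level": level, "event": event, **fields}
    # Old free-text format keeps existing dashboards alive...
    # (level must match a logging method name: "info", "warning", "error")
    getattr(legacy, level)("%s %s", event, fields)
    # ...while the JSON line feeds jq / Loki / Elasticsearch.
    print(json.dumps(record))
    return record

log_event("info", "user_login", user="john@example.com", success=True)
log_event("error", "order_processing", order_id=12345, error="connection_timeout")
```

Once the new pipeline is trusted, dropping the legacy call is a one-line change.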

The Payoff

Last week I debugged a production issue by running:

jq 'select(.user_id=="abc123" and .timestamp>"2026-02-20")' /var/log/api/*.json | head -50

Five seconds to see exactly what one user experienced. No regex. No guessing at field positions. No hoping the log format didn’t change.

That’s the promise of structured logging: your logs become a queryable database of everything that happened. Worth the setup.


Computing Arts explores the craft of production systems. More at computingarts.com.