jq Mastery: JSON Processing on the Command Line

Every API returns JSON. Every config file is JSON. If you’re not fluent in jq, you’re copying data by hand like it’s 1995.

The Basics

# Pretty print
echo '{"name":"test","value":42}' | jq '.'

# Extract a field
echo '{"name":"test","value":42}' | jq '.name'
# "test"

# Raw output (no quotes)
echo '{"name":"test","value":42}' | jq -r '.name'
# test

Working with APIs

# GitHub API
curl -s https://api.github.com/users/torvalds | jq '.login, .public_repos'

# Extract specific fields
curl -s https://api.github.com/repos/stedolan/jq | jq '{name, stars: .stargazers_count, language}'

# AWS CLI (already outputs JSON)
aws ec2 describe-instances | jq '.Reservations[].Instances[] | {id: .InstanceId, state: .State.Name}'

Array Operations

# Sample data
DATA='[{"name":"alice","age":30},{"name":"bob","age":25},{"name":"carol","age":35}]'

# First element
echo $DATA | jq '.[0]'

# Last element
echo $DATA | jq '.[-1]'

# Slice
echo $DATA | jq '.[0:2]'

# All names
echo $DATA | jq '.[].name'

# Array of names
echo $DATA | jq '[.[].name]'

# Length
echo $DATA | jq 'length'

Filtering

# Select by condition
echo $DATA | jq '.[] | select(.age > 28)'

# Multiple conditions
echo $DATA | jq '.[] | select(.age > 25 and .name != "carol")'

# Contains
echo '[{"tags":["web","api"]},{"tags":["cli"]}]' | jq '.[] | select(.tags | contains(["api"]))'

# Has key
echo '{"a":1,"b":null}' | jq 'has("a"), has("c")'
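These filters compose; as one extra illustrative example beyond the original list, a select can feed straight into a projection over the same sample data:

# Names of everyone over 28, collected into a single array
echo $DATA | jq '[.[] | select(.age > 28) | .name]'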
Transformation

# Add/modify fields
echo '{"name":"test"}' | jq '. + {status: "active", count: 0}'

# Update existing field
echo '{"count":5}' | jq '.count += 1'

# Delete field
echo '{"a":1,"b":2,"c":3}' | jq 'del(.b)'

# Rename key
echo '{"old_name":"value"}' | jq '{new_name: .old_name}'

# Map over array
echo '[1,2,3,4,5]' | jq 'map(. * 2)'

# Map with objects
echo $DATA | jq 'map({username: .name, birth_year: (2026 - .age)})'

String Operations

# Concatenation
echo '{"first":"John","last":"Doe"}' | jq '.first + " " + .last'

# String interpolation
echo '{"name":"test","ver":"1.0"}' | jq '"\(.name)-\(.ver).tar.gz"'

# Split
echo '{"path":"/usr/local/bin"}' | jq '.path | split("/")'

# Join
echo '["a","b","c"]' | jq 'join(",")'

# Upper/lower
echo '"Hello World"' | jq 'ascii_downcase'
echo '"Hello World"' | jq 'ascii_upcase'

# Test regex
echo '{"email":"test@example.com"}' | jq '.email | test("@")'

# Replace
echo '"hello world"' | jq 'gsub("world"; "jq")'

Conditionals

# If-then-else
echo '{"status":200}' | jq 'if .status == 200 then "ok" else "error" end'

# Alternative operator (default value)
echo '{"a":1}' | jq '.b // "default"'

# Null handling
echo '{"a":null}' | jq '.a // "was null"'

# Error handling
echo '{}' | jq '.missing.nested // "not found"'

Grouping and Aggregation

LOGS='[
  {"level":"error","msg":"failed"},
  {"level":"info","msg":"started"},
  {"level":"error","msg":"timeout"},
  {"level":"info","msg":"completed"}
]'

# Group by field
echo $LOGS | jq 'group_by(.level)'

# Count per group
echo $LOGS | jq 'group_by(.level) | map({level: .[0].level, count: length})'

# Unique values
echo $LOGS | jq '[.[].level] | unique'

# Sort
echo $DATA | jq 'sort_by(.age)'

# Reverse sort
echo $DATA | jq 'sort_by(.age) | reverse'

# Min/max
echo '[5,2,8,1,9]' | jq 'min, max'

# Sum
echo '[1,2,3,4,5]' | jq 'add'

# Average
echo '[1,2,3,4,5]' | jq 'add / length'

Constructing Output

# Build new object
curl -s https://api.github.com/users/torvalds | jq '{
  username: .login,
  repos: .public_repos,
  profile: .html_url
}'

# Build array
echo '{"users":[{"name":"a"},{"name":"b"}]}' | jq '[.users[].name]'

# Multiple outputs to array
echo '{"a":1,"b":2}' | jq '[.a, .b, .a + .b]'

# Key-value pairs
echo '{"a":1,"b":2}' | jq 'to_entries'
# [{"key":"a","value":1},{"key":"b","value":2}]

# Back to object
echo '[{"key":"a","value":1}]' | jq 'from_entries'

# Transform keys
echo '{"old_a":1,"old_b":2}' | jq 'with_entries(.key |= ltrimstr("old_"))'
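Construction also works from scratch: jq -n starts with no input and pairs well with --arg for building request bodies. A small illustrative sketch (the endpoint is a stand-in, not from the post):

# Build a JSON payload without hand-pasting quotes, then POST it
jq -n --arg user "alice" --argjson active true '{user: $user, active: $active}' |
  curl -s -X POST -H 'Content-Type: application/json' -d @- https://api.example.com/users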
Real-World Examples

Parse AWS Instance List

aws ec2 describe-instances | jq -r '
  .Reservations[].Instances[] |
  [.InstanceId, .State.Name, (.Tags[]? | select(.Key=="Name") | .Value) // "unnamed"] |
  @tsv
'

Filter Docker Containers

docker inspect $(docker ps -q) | jq '.[] | {
  name: .Name,
  image: .Config.Image,
  status: .State.Status,
  ip: .NetworkSettings.IPAddress
}'

Process Log Files

# Count errors by type
cat app.log | jq -s 'group_by(.error_type) | map({type: .[0].error_type, count: length}) | sort_by(.count) | reverse'

# Extract errors from last hour
cat app.log | jq --arg cutoff "$(date -d '1 hour ago' -Iseconds)" '
  select(.timestamp > $cutoff and .level == "error")
'

Transform Config Files

# Merge configs
jq -s '.[0] * .[1]' base.json override.json

# Update nested value
jq '.database.host = "newhost.example.com"' config.json

# Add to array
jq '.allowed_ips += ["10.0.0.5"]' config.json

Generate Reports

# Kubernetes pod status
kubectl get pods -o json | jq -r '
  .items[] |
  [.metadata.name, .status.phase, (.status.containerStatuses[0].restartCount // 0)] |
  @tsv
' | column -t

Useful Flags

# Compact output (no pretty print)
jq -c '.'

# Raw output (no quotes on strings)
jq -r '.name'

# Raw input (treat input as string, not JSON)
jq -R 'split(",")'

# Slurp (read all inputs into array)
cat *.json | jq -s '.'

# Pass variable
jq --arg name "test" '.name = $name'

# Pass JSON variable
jq --argjson count 42 '.count = $count'

# Read from file
jq --slurpfile users users.json '.users = $users'

# Exit with error if output is null/false
jq -e '.important_field' && echo "exists"

# Sort keys in output
jq -S '.'

Output Formats

# Tab-separated
echo $DATA | jq -r '.[] | [.name, .age] | @tsv'

# CSV
echo $DATA | jq -r '.[] | [.name, .age] | @csv'

# URI encoding
echo '{"q":"hello world"}' | jq -r '.q | @uri'

# Base64
echo '{"data":"secret"}' | jq -r '.data | @base64'

# Shell-safe
echo '{"cmd":"echo hello"}' | jq -r '.cmd | @sh'

Debugging

# Show type
echo '{"a":[1,2,3]}' | jq '.a | type'

# Show keys
echo '{"a":1,"b":2}' | jq 'keys'

# Debug output (shows intermediate values)
echo '{"x":{"y":{"z":1}}}' | jq '.x | debug | .y | debug | .z'

# Path to value
echo '{"a":{"b":{"c":1}}}' | jq 'path(.. | select(. == 1))'

Quick Reference

# Identity
.

# Field access
.field
.field.nested

# Array access
.[0]
.[-1]
.[2:5]

# Iterate array
.[]

# Pipe
.[] | .name

# Collect into array
[.[] | .name]

# Object construction
{newkey: .oldkey}

# Conditionals
if COND then A else B end
VALUE // DEFAULT

# Comparison
==, !=, <, >, <=, >=
and, or, not

# Array functions
map(f), select(f), sort_by(f), group_by(f), unique, length, first, last, nth(n), flatten, reverse, contains(x), inside(x), add, min, max

# String functions
split(s), join(s), test(re), match(re), gsub(re;s), ascii_downcase, ascii_upcase, ltrimstr(s), rtrimstr(s), startswith(s), endswith(s)

# Object functions
keys, values, has(k), in(o), to_entries, from_entries, with_entries(f)

# Type functions
type                                                  # returns "string", "number", "array", ...
numbers, strings, booleans, nulls, arrays, objects    # select inputs of that type

jq turns JSON from a data format into a query language. Once you internalize the pipe-and-filter model, you’ll wonder how you ever survived without it. ...

February 25, 2026 · 7 min · 1346 words · Rob Washington

Systemd Timers: The Modern Cron Replacement

Cron has run scheduled tasks since 1975. It works, but systemd timers offer significant advantages: integrated logging, dependency management, randomized delays, and calendar-based scheduling that actually makes sense.

Why Switch from Cron?

Logging: Timer output goes to journald. No more digging through mail or custom log files.
Dependencies: Wait for network, mounts, or other services before running.
Accuracy: Monotonic timers don’t drift. Calendar timers handle DST correctly.
Visibility: systemctl list-timers shows all scheduled jobs and when they’ll run next.

...
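A quick taste, as an illustrative sketch (the script path is made up): systemd-run can create a one-off transient timer without writing unit files, and list-timers shows every schedule at a glance:

# Run a backup script at 03:00 every day as a transient timer unit
systemd-run --on-calendar='*-*-* 03:00:00' --unit=nightly-backup /usr/local/bin/backup.sh

# See all timers, their last run, and when they fire next
systemctl list-timers --all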

February 25, 2026 · 6 min · 1181 words · Rob Washington

Environment Variables Done Right: 12-Factor Config in Practice

The third factor of the 12-Factor App methodology states: “Store config in the environment.” Simple advice that’s surprisingly easy to get wrong.

The Core Principle

Configuration that varies between environments (dev, staging, production) should come from environment variables, not code. This includes:

Database connection strings
API keys and secrets
Feature flags
Service URLs
Port numbers
Log levels

What stays in code: application logic, default behaviors, anything that doesn’t change between deploys.

...
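A minimal shell-level sketch of the split (names and values are illustrative): environment-specific values come from the environment, while only safe defaults live alongside the code:

# Provided per environment by deploy tooling, a secrets manager, or a local .env file
export DATABASE_URL='postgres://app:s3cret@db.internal:5432/app'
export LOG_LEVEL='debug'

# The launch script reads them; defaults only for values that are safe to hard-code
PORT="${PORT:-8080}"
exec ./server --port "$PORT"    # ./server is a stand-in for your application binary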

February 25, 2026 · 6 min · 1182 words · Rob Washington

Makefiles for Modern Development: Beyond C Compilation

Make was designed for compiling C programs in 1976. Nearly 50 years later, it’s still one of the most practical automation tools available—not for its original purpose, but as a universal task runner.

Why Make in 2026?

It’s already installed. Every Unix system has make. No npm install, no pip, no version managers.
It’s declarative. Define what you want, not how to get there (with dependencies handled automatically).
It’s documented. make help can list all your targets. The Makefile itself is documentation.

...

February 25, 2026 · 7 min · 1444 words · Rob Washington

SSH Config Mastery: Organize Your Connections Like a Pro

If you’re still typing ssh -i ~/.ssh/my-key.pem -p 2222 admin@192.168.1.50 every time you connect, you’re doing it wrong. The SSH config file is one of the most underutilized productivity tools in a developer’s arsenal.

The Basics: ~/.ssh/config

Create or edit ~/.ssh/config:

Host dev
    HostName dev.example.com
    User deploy
    IdentityFile ~/.ssh/deploy_key
    Port 22

Now you just type ssh dev. That’s it.

Host Patterns

Wildcards let you apply settings to multiple hosts: ...
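One sanity check worth knowing (not shown in the excerpt): ssh -G prints the fully resolved configuration for an alias, so you can confirm which Host blocks actually applied:

# Show the effective settings ssh would use for the "dev" alias
ssh -G dev | grep -E '^(hostname|user|port|identityfile) '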

February 25, 2026 · 5 min · 955 words · Rob Washington

GitHub Actions Self-Hosted Runners: Complete Setup Guide

When GitHub-hosted runners aren’t enough—when you need GPU access, specific hardware, private network connectivity, or just want to stop paying per-minute—self-hosted runners are the answer.

Why Self-Hosted?

Performance: Your hardware, your speed. No cold starts, local caching, faster artifact access.
Cost: After a certain threshold, self-hosted is dramatically cheaper. GitHub-hosted minutes add up fast for active repos.
Access: Private networks, internal services, specialized hardware, air-gapped environments.
Control: Exact OS versions, pre-installed dependencies, custom security configurations.

...

February 25, 2026 · 5 min · 1008 words · Rob Washington

Infrastructure Testing: Validating Your IaC Before Production

You test your application code. Why not your infrastructure code? Infrastructure as Code (IaC) has the same failure modes as any software: bugs, regressions, unintended side effects. Yet most teams treat Terraform and Ansible like configuration files rather than code that deserves tests.

Why Infrastructure Testing Matters

A Terraform plan looks correct until it:

Creates a security group that’s too permissive
Deploys to the wrong availability zone
Sets instance types that exceed your budget
Breaks networking in ways that only manifest at runtime

Manual review catches some issues. Automated testing catches more.

...
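A minimal first rung on that ladder, as a sketch using only Terraform’s built-in checks (no test framework assumed):

# Gate CI on formatting, validity, and an explicit plan
terraform fmt -check -recursive
terraform validate
terraform plan -detailed-exitcode -out=plan.tfplan   # exit code 2 means the plan would change infrastructure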

February 16, 2026 · 6 min · 1115 words · Rob Washington

Structured Logging: Making Logs Queryable and Actionable

Plain text logs are for humans. Structured logs are for machines. In production, machines need to read your logs before humans do. When your service handles thousands of requests per second, grep stops working. You need logs that can be indexed, queried, aggregated, and alerted on. That means structure.

The Problem with Text Logs

[2026-02-16 08:30:15] INFO: User someone@example.com logged in from 192.168.1.50
[2026-02-16 08:30:16] ERROR: Payment failed for order 12345: insufficient funds
[2026-02-16 08:30:17] WARN: High memory usage detected: 87%

Looks readable. But try answering: ...
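For contrast, a minimal sketch of the structured alternative (field names are illustrative, not from the article): one JSON object per line, which a tool like jq can query directly:

# One event per line, machine-readable
echo '{"ts":"2026-02-16T08:30:16Z","level":"error","event":"payment_failed","order_id":12345,"reason":"insufficient_funds"}' >> app.log

# Now "find all payment failures" is a query, not a grep guess
jq -c 'select(.level == "error" and .event == "payment_failed")' app.log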

February 16, 2026 · 7 min · 1406 words · Rob Washington

Blue-Green Deployments: Zero-Downtime Releases with Instant Rollback

What if you could deploy with a safety net? Blue-green deployments give you exactly that: two identical production environments, one serving traffic while the other waits in the wings. Deploy to the idle environment, test it, then switch traffic instantly. If something breaks, switch back. No rollback procedure—just flip.

The Core Concept

You maintain two identical environments:

Blue: Currently serving production traffic
Green: Idle, ready for the next release

Deployment flow: ...
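What the switch can look like in practice, as a minimal sketch assuming an nginx front end where active.conf is a symlink to either blue.conf or green.conf (paths are illustrative):

# Cut over: repoint the symlink at the idle (green) upstream, then reload
ln -sfn /etc/nginx/upstreams/green.conf /etc/nginx/upstreams/active.conf
nginx -t && nginx -s reload    # validate first; reload keeps existing connections alive

# Rollback is the same flip in reverse
ln -sfn /etc/nginx/upstreams/blue.conf /etc/nginx/upstreams/active.conf && nginx -s reload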

February 16, 2026 · 7 min · 1343 words · Rob Washington

Database Migrations: Schema Changes Without Downtime

The scariest deploy isn’t code—it’s schema changes. One wrong migration can lock tables, corrupt data, or bring down production. Zero-downtime migrations require discipline, but they’re achievable.

The Problem

Traditional migrations assume you can take the database offline:

-- Dangerous in production
ALTER TABLE users ADD COLUMN phone VARCHAR(20) NOT NULL;

This locks the table, blocks all reads and writes, and fails if any existing rows lack a value. In a busy system, that’s an outage.

...
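For contrast, one common zero-downtime sequence, sketched here in PostgreSQL terms (not necessarily the article’s exact steps), splits that single statement into stages:

# Stage 1: add the column as nullable (metadata-only change in PostgreSQL, no long lock)
psql "$DATABASE_URL" -c 'ALTER TABLE users ADD COLUMN phone VARCHAR(20);'

# Stage 2: backfill existing rows (batch this on large tables)
psql "$DATABASE_URL" -c "UPDATE users SET phone = '' WHERE phone IS NULL;"

# Stage 3: enforce NOT NULL only once every row has a value
psql "$DATABASE_URL" -c 'ALTER TABLE users ALTER COLUMN phone SET NOT NULL;'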

February 16, 2026 · 6 min · 1095 words · Rob Washington