curl for API Testing: The Essential Commands

Before Postman, before Insomnia, there was curl. It’s still the fastest way to test an API, and it’s available on every server you’ll ever SSH into.

Basic Requests

```shell
# GET request
curl https://api.example.com/users

# With headers shown
curl -i https://api.example.com/users

# Headers only
curl -I https://api.example.com/users

# Silent (no progress bar)
curl -s https://api.example.com/users

# Follow redirects
curl -L https://api.example.com/old-endpoint
```

HTTP Methods

```shell
# POST
curl -X POST https://api.example.com/users

# PUT
curl -X PUT https://api.example.com/users/123

# PATCH
curl -X PATCH https://api.example.com/users/123

# DELETE
curl -X DELETE https://api.example.com/users/123
```

Sending Data

JSON Body

```shell
curl -X POST https://api.example.com/users \
  -H "Content-Type: application/json" \
  -d '{"name": "Alice", "email": "alice@example.com"}'

# From file
curl -X POST https://api.example.com/users \
  -H "Content-Type: application/json" \
  -d @user.json

# Pretty JSON with jq
curl -s https://api.example.com/users | jq .
```

Form Data

```shell
# URL-encoded form
curl -X POST https://api.example.com/login \
  -d "username=alice&password=secret"

# Multipart form (file upload)
curl -X POST https://api.example.com/upload \
  -F "file=@photo.jpg" \
  -F "description=My photo"
```

Query Parameters

```shell
# In URL
curl "https://api.example.com/search?q=hello&limit=10"

# With --data-urlencode (safer)
curl -G https://api.example.com/search \
  --data-urlencode "q=hello world" \
  --data-urlencode "limit=10"
```

Headers

```shell
# Single header
curl -H "Authorization: Bearer token123" https://api.example.com/me

# Multiple headers
curl -H "Authorization: Bearer token123" \
  -H "Accept: application/json" \
  -H "X-Request-ID: abc123" \
  https://api.example.com/me

# Common patterns
curl -H "Content-Type: application/json" ...
curl -H "Accept: application/json" ...
curl -H "Authorization: Basic $(echo -n 'user:pass' | base64)" ...
```

Authentication

Bearer Token

```shell
curl -H "Authorization: Bearer eyJhbGc..." https://api.example.com/me
```

Basic Auth

```shell
curl -u username:password https://api.example.com/secure

# Or manually
curl -H "Authorization: Basic $(echo -n 'user:pass' | base64)" https://api.example.com/secure
```

API Key

```shell
# In header
curl -H "X-API-Key: your-api-key" https://api.example.com/data

# In query string
curl "https://api.example.com/data?api_key=your-api-key"
```

Response Handling

Save to File

```shell
# Save response body
curl -o response.json https://api.example.com/data

# Save with remote filename
curl -O https://example.com/file.zip

# Save headers and body separately
curl -D headers.txt -o body.json https://api.example.com/data
```

Extract Specific Info

```shell
# HTTP status code only
curl -s -o /dev/null -w "%{http_code}" https://api.example.com/health

# Response time
curl -s -o /dev/null -w "%{time_total}" https://api.example.com/health

# Multiple metrics
curl -s -o /dev/null -w "Status: %{http_code}\nTime: %{time_total}s\nSize: %{size_download} bytes\n" \
  https://api.example.com/data
```

Write-Out Variables

```shell
curl -w "
HTTP Code:      %{http_code}
Total Time:     %{time_total}s
DNS Lookup:     %{time_namelookup}s
Connect:        %{time_connect}s
TTFB:           %{time_starttransfer}s
Download Size:  %{size_download} bytes
Download Speed: %{speed_download} bytes/sec
" -o /dev/null -s https://api.example.com/data
```

Debugging

Verbose Output

```shell
# Show request and response headers (includes TLS handshake details)
curl -v https://api.example.com/data

# Trace everything (hex dump of all traffic)
curl --trace - https://api.example.com/data
```

Debug TLS/SSL

```shell
# Show certificate info
curl -vI https://api.example.com 2>&1 | grep -A 10 "Server certificate"

# Skip certificate verification (don't use in production!)
curl -k https://self-signed.example.com/data

# Use specific CA cert
curl --cacert /path/to/ca.crt https://api.example.com/data

# Client certificate auth
curl --cert client.crt --key client.key https://api.example.com/data
```

Timeouts and Retries

```shell
# Connection timeout (seconds)
curl --connect-timeout 5 https://api.example.com/data

# Max time for entire operation
curl --max-time 30 https://api.example.com/data

# Retry on failure
curl --retry 3 --retry-delay 2 https://api.example.com/data

# Retry only on specific errors
curl --retry 3 --retry-connrefused https://api.example.com/data
```

Cookies

```shell
# Send cookie
curl -b "session=abc123" https://api.example.com/dashboard

# Save cookies to file
curl -c cookies.txt https://api.example.com/login -d "user=alice&pass=secret"

# Use saved cookies
curl -b cookies.txt https://api.example.com/dashboard

# Both save and send
curl -b cookies.txt -c cookies.txt https://api.example.com/page
```

Proxies

```shell
# HTTP proxy
curl -x http://proxy.example.com:8080 https://api.example.com/data

# SOCKS5 proxy
curl --socks5 localhost:1080 https://api.example.com/data

# Proxy with auth
curl -x http://user:pass@proxy.example.com:8080 https://api.example.com/data
```

Practical Examples

Health Check Script

```shell
#!/bin/bash
URL="https://api.example.com/health"
STATUS=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 "$URL")

if [ "$STATUS" = "200" ]; then
  echo "OK"
  exit 0
else
  echo "FAIL: HTTP $STATUS"
  exit 1
fi
```

API Test with Error Handling

```shell
#!/bin/bash
response=$(curl -s -w "\n%{http_code}" \
  -X POST https://api.example.com/users \
  -H "Content-Type: application/json" \
  -d '{"name": "Test User"}')

body=$(echo "$response" | head -n -1)
status=$(echo "$response" | tail -n 1)

if [ "$status" = "201" ]; then
  echo "Created user: $(echo "$body" | jq -r '.id')"
else
  echo "Error $status: $(echo "$body" | jq -r '.error')"
  exit 1
fi
```

Pagination Loop

```shell
#!/bin/bash
page=1
while true; do
  response=$(curl -s "https://api.example.com/items?page=$page&per_page=100")
  count=$(echo "$response" | jq '.items | length')
  if [ "$count" = "0" ]; then
    break
  fi
  echo "$response" | jq -r '.items[].name'
  ((page++))
done
```

OAuth Token Flow

```shell
# Get token
TOKEN=$(curl -s -X POST https://auth.example.com/oauth/token \
  -d "grant_type=client_credentials" \
  -d "client_id=$CLIENT_ID" \
  -d "client_secret=$CLIENT_SECRET" \
  | jq -r '.access_token')

# Use token
curl -H "Authorization: Bearer $TOKEN" https://api.example.com/data
```

Config Files

Save common options in ~/.curlrc: ...
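The excerpt is cut off at the ~/.curlrc mention; as a sketch of what such a file might hold (these particular defaults are illustrative, not from the original post — ~/.curlrc takes one long option name per line, with or without a value):

```
# ~/.curlrc — options here apply to every curl invocation
silent
show-error
connect-timeout = 5
max-time = 30
```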

February 24, 2026 · 6 min · 1094 words · Rob Washington

Python Virtual Environments: A Practical Guide

Every Python project should have its own virtual environment. It’s not optional — it’s how you avoid dependency hell, reproducibility issues, and the dreaded “but it works on my machine.”

Why Virtual Environments?

Without virtual environments:

- Project A needs requests==2.25
- Project B needs requests==2.31
- Both use system Python
- One project breaks

With virtual environments:

- Each project has isolated dependencies
- Different Python versions per project
- Reproducible across machines
- No sudo required for installing packages

The Built-in Way: venv

Python 3.3+ includes venv: ...
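As a quick sketch of the venv workflow the post goes on to describe (the directory name .venv is a common convention, not a requirement):

```shell
# Create an isolated environment inside the project
python3 -m venv .venv

# Activate it (bash/zsh; Windows uses .venv\Scripts\activate)
source .venv/bin/activate

# pip now installs into .venv, not the system site-packages
pip install requests

# Drop back to the system environment
deactivate
```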

February 24, 2026 · 8 min · 1501 words · Rob Washington

Docker Compose Patterns: From Development to Production

Docker Compose started as a development tool but has grown into something usable for small production deployments. Here are patterns that make it work well in both contexts.

Environment-Specific Overrides

Base configuration with environment-specific overrides:

```yaml
# docker-compose.yml (base)
services:
  app:
    image: myapp:${VERSION:-latest}
    environment:
      - DATABASE_URL
    depends_on:
      - db

  db:
    image: postgres:15
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
```

```yaml
# docker-compose.override.yml (auto-loaded in dev)
services:
  app:
    build: .
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - DEBUG=true
    ports:
      - "3000:3000"

  db:
    ports:
      - "5432:5432"
```

```yaml
# docker-compose.prod.yml
services:
  app:
    restart: always
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

  db:
    restart: always
```

Usage: ...
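The Usage section is truncated above; the conventional invocation pattern for files named this way (a sketch, assuming the three filenames shown) is:

```shell
# Development: docker-compose.override.yml is merged automatically
docker compose up

# Production: list files explicitly — the override file is NOT loaded
# unless it appears in a -f flag
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```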

February 24, 2026 · 6 min · 1253 words · Rob Washington

jq: Command-Line JSON Processing

If you work with APIs, logs, or config files, you work with JSON. And jq is how you make that not painful. It’s like sed and awk had a baby that speaks JSON fluently.

The Basics

```shell
# Pretty-print JSON
curl -s https://api.example.com/data | jq .

# Get a field
echo '{"name": "Alice", "age": 30}' | jq '.name'
# "Alice"

# Get raw string (no quotes)
echo '{"name": "Alice"}' | jq -r '.name'
# Alice
```

Navigating Objects

```shell
# Nested fields
echo '{"user": {"name": "Alice", "email": "alice@example.com"}}' | jq '.user.name'
# "Alice"

# Multiple fields
echo '{"name": "Alice", "age": 30, "city": "NYC"}' | jq '{name, city}'
# {"name": "Alice", "city": "NYC"}

# Rename fields
echo '{"firstName": "Alice"}' | jq '{name: .firstName}'
# {"name": "Alice"}
```

Working with Arrays

```shell
# Get first element
echo '[1, 2, 3]' | jq '.[0]'
# 1

# Get last element
echo '[1, 2, 3]' | jq '.[-1]'
# 3

# Slice
echo '[1, 2, 3, 4, 5]' | jq '.[1:3]'
# [2, 3]

# Get all elements
echo '[{"name": "Alice"}, {"name": "Bob"}]' | jq '.[].name'
# "Alice"
# "Bob"

# Wrap results in array
echo '[{"name": "Alice"}, {"name": "Bob"}]' | jq '[.[].name]'
# ["Alice", "Bob"]
```

Filtering

```shell
# Select where condition is true
echo '[{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]' | \
  jq '.[] | select(.age > 26)'
# {"name": "Alice", "age": 30}

# Multiple conditions
echo '[{"name": "Alice", "active": true}, {"name": "Bob", "active": false}]' | \
  jq '.[] | select(.active == true and .name != "Admin")'

# Contains
echo '[{"tags": ["dev", "prod"]}, {"tags": ["staging"]}]' | \
  jq '.[] | select(.tags | contains(["prod"]))'

# String matching
echo '[{"name": "Alice"}, {"name": "Bob"}, {"name": "Alicia"}]' | \
  jq '.[] | select(.name | startswith("Ali"))'
```

Transforming Data

```shell
# Map over array
echo '[1, 2, 3]' | jq 'map(. * 2)'
# [2, 4, 6]

# Map with objects
echo '[{"name": "Alice", "score": 85}, {"name": "Bob", "score": 92}]' | \
  jq 'map({name, passed: .score >= 90})'
# [{"name": "Alice", "passed": false}, {"name": "Bob", "passed": true}]

# Add fields
echo '{"name": "Alice"}' | jq '. + {role: "admin"}'
# {"name": "Alice", "role": "admin"}

# Update fields
echo '{"name": "alice"}' | jq '.name |= ascii_upcase'
# {"name": "ALICE"}

# Delete fields
echo '{"name": "Alice", "password": "secret"}' | jq 'del(.password)'
# {"name": "Alice"}
```

Aggregation

```shell
# Length
echo '[1, 2, 3, 4, 5]' | jq 'length'
# 5

# Sum
echo '[1, 2, 3, 4, 5]' | jq 'add'
# 15

# Min/Max
echo '[3, 1, 4, 1, 5]' | jq 'min, max'
# 1
# 5

# Unique
echo '[1, 2, 2, 3, 3, 3]' | jq 'unique'
# [1, 2, 3]

# Group by
echo '[{"type": "a", "val": 1}, {"type": "b", "val": 2}, {"type": "a", "val": 3}]' | \
  jq 'group_by(.type) | map({type: .[0].type, total: map(.val) | add})'
# [{"type": "a", "total": 4}, {"type": "b", "total": 2}]

# Sort
echo '[{"name": "Bob"}, {"name": "Alice"}]' | jq 'sort_by(.name)'
# [{"name": "Alice"}, {"name": "Bob"}]
```

String Operations

```shell
# Split
echo '{"path": "/usr/local/bin"}' | jq '.path | split("/")'
# ["", "usr", "local", "bin"]

# Join
echo '["a", "b", "c"]' | jq 'join("-")'
# "a-b-c"

# Replace
echo '{"msg": "Hello World"}' | jq '.msg | gsub("World"; "jq")'
# "Hello jq"

# String interpolation
echo '{"name": "Alice", "age": 30}' | jq '"Name: \(.name), Age: \(.age)"'
# "Name: Alice, Age: 30"
```

Working with Keys

```shell
# Get keys
echo '{"a": 1, "b": 2, "c": 3}' | jq 'keys'
# ["a", "b", "c"]

# Get values (no direct analogue of keys — collect them with [.[]])
echo '{"a": 1, "b": 2, "c": 3}' | jq '[.[]]'
# [1, 2, 3]

# Convert object to array of key-value pairs
echo '{"a": 1, "b": 2}' | jq 'to_entries'
# [{"key": "a", "value": 1}, {"key": "b", "value": 2}]

# Convert back
echo '[{"key": "a", "value": 1}]' | jq 'from_entries'
# {"a": 1}

# Transform keys
echo '{"firstName": "Alice", "lastName": "Smith"}' | \
  jq 'with_entries(.key |= ascii_downcase)'
# {"firstname": "Alice", "lastname": "Smith"}
```

Conditionals

```shell
# If-then-else
echo '{"status": 200}' | jq 'if .status == 200 then "OK" else "Error" end'
# "OK"

# Alternative operator (default value)
echo '{"name": "Alice"}' | jq '.age // 0'
# 0
echo '{"name": "Alice", "age": 30}' | jq '.age // 0'
# 30

# Try (ignore errors)
echo '{"data": "not json"}' | jq '.data | try fromjson'
# (no output — the parse error is swallowed)
```

Real-World Examples

Parse API Response

```shell
# Extract users from paginated API
curl -s 'https://api.example.com/users' | \
  jq '.data[] | {id, name: .attributes.name, email: .attributes.email}'

# Get specific fields as TSV
curl -s 'https://api.example.com/users' | \
  jq -r '.data[] | [.id, .attributes.name] | @tsv'
```

Process Log Files

```shell
# Parse JSON logs, filter errors
cat app.log | jq -c 'select(.level == "error")'

# Count by status code
cat access.log | jq -s 'group_by(.status) | map({status: .[0].status, count: length})'

# Get unique IPs
cat access.log | jq -s '[.[].ip] | unique'
```

Transform Config Files

```shell
# Merge configs
jq -s '.[0] * .[1]' base.json override.json

# Update nested value
jq '.database.host = "newhost.example.com"' config.json

# Add to array
jq '.allowed_ips += ["10.0.0.5"]' config.json
```

AWS CLI Output

```shell
# List EC2 instance IDs and states
aws ec2 describe-instances | \
  jq -r '.Reservations[].Instances[] | [.InstanceId, .State.Name] | @tsv'

# Get running instances
aws ec2 describe-instances | \
  jq '.Reservations[].Instances[] | select(.State.Name == "running") | .InstanceId'

# Format as table
aws ec2 describe-instances | \
  jq -r '["ID", "Type", "State"], (.Reservations[].Instances[] | [.InstanceId, .InstanceType, .State.Name]) | @tsv' | \
  column -t
```

Kubernetes

```shell
# Get pod names and statuses
kubectl get pods -o json | \
  jq -r '.items[] | [.metadata.name, .status.phase] | @tsv'

# Find pods not running
kubectl get pods -o json | \
  jq '.items[] | select(.status.phase != "Running") | .metadata.name'

# Get container images
kubectl get pods -o json | \
  jq -r '[.items[].spec.containers[].image] | unique | .[]'
```

Output Formats

```shell
# Compact (one line)
echo '{"a": 1}' | jq -c .
# {"a":1}

# Tab-separated
echo '[{"a": 1, "b": 2}]' | jq -r '.[] | [.a, .b] | @tsv'
# 1	2

# CSV
echo '[{"a": 1, "b": 2}]' | jq -r '.[] | [.a, .b] | @csv'
# 1,2

# URI encode
echo '{"q": "hello world"}' | jq -r '.q | @uri'
# hello%20world

# Base64
echo '{"data": "hello"}' | jq -r '.data | @base64'
# aGVsbG8=
```

Quick Reference

| Pattern | Description |
|---|---|
| `.field` | Get field |
| `.[]` | Iterate array |
| `.[0]` | First element |
| `.[-1]` | Last element |
| `select(cond)` | Filter |
| `map(expr)` | Transform array |
| `. + {}` | Add fields |
| `del(.field)` | Remove field |
| `// default` | Default value |
| `@tsv`, `@csv` | Output format |
| `-r` | Raw output |
| `-c` | Compact output |
| `-s` | Slurp (read all input as array) |

jq has a learning curve, but it pays off quickly. Once you internalize the patterns, you’ll wonder how you ever worked with JSON without it. Start with .field, .[].field, and select() — those three cover 80% of use cases. ...

February 24, 2026 · 7 min · 1338 words · Rob Washington

Systemd Service Hardening: Running Services Securely

Most systemd services run with full system access by default. That’s fine until one gets compromised. Systemd provides powerful sandboxing options that most people never use.

The Basics: User and Group

Never run services as root if they don’t need it:

```ini
[Service]
User=myapp
Group=myapp
```

Create a dedicated user:

```shell
sudo useradd --system --no-create-home --shell /usr/sbin/nologin myapp
```

Filesystem Restrictions

Read-Only Root

Make the entire filesystem read-only: ...
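The read-only-root section is cut off above; a sketch of the sandboxing directives it is likely building toward (all are real systemd.exec options, but the paths and unit layout here are illustrative):

```ini
[Service]
User=myapp
Group=myapp
NoNewPrivileges=true            # block setuid/setcap privilege escalation
ProtectSystem=strict            # mount the entire filesystem read-only
ProtectHome=true                # hide /home, /root and /run/user
ReadWritePaths=/var/lib/myapp   # explicit carve-out for writable state
PrivateTmp=true                 # service gets its own /tmp and /var/tmp
```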

February 24, 2026 · 5 min · 961 words · Rob Washington

Git Bisect: Finding the Commit That Broke Everything

Something’s broken. It worked last week. Somewhere in the 47 commits since then, someone introduced a bug. You could check each commit manually, or you could let git do the work.

git bisect performs a binary search through your commit history to find the exact commit that introduced a problem. Instead of checking 47 commits, you check about 6.

The Basic Workflow

```shell
# Start bisecting
git bisect start

# Mark current commit as bad (has the bug)
git bisect bad

# Mark a known good commit (before the bug existed)
git bisect good v1.2.0
# or
git bisect good abc123

# Git checks out a commit halfway between good and bad.
# Test it, then tell git the result:
git bisect good   # This commit doesn't have the bug
# or
git bisect bad    # This commit has the bug

# Git narrows down and checks out another commit.
# Repeat until git finds the first bad commit.

# When done, return to original state
git bisect reset
```

Example Session

```shell
$ git bisect start
$ git bisect bad              # HEAD is broken
$ git bisect good v2.0.0      # v2.0.0 worked fine
Bisecting: 23 revisions left to test after this (roughly 5 steps)
[abc123...] Add caching layer

$ ./run-tests.sh
Tests pass!
$ git bisect good
Bisecting: 11 revisions left to test after this (roughly 4 steps)
[def456...] Refactor auth module

$ ./run-tests.sh
FAIL!
$ git bisect bad
Bisecting: 5 revisions left to test after this (roughly 3 steps)
[ghi789...] Update dependencies

# ... continue until:
e4f5a6b7c8d9e0f is the first bad commit
commit e4f5a6b7c8d9e0f
Author: Someone <someone@example.com>
Date:   Mon Feb 20 14:30:00 2024

    Fix edge case in login flow

# This commit introduced the bug!
$ git bisect reset
```

Automated Bisecting

If you have a script that can test for the bug: ...
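The automated section is cut off above; its core is `git bisect run`, which replays the good/bad loop using a script’s exit code. A minimal sketch reusing the post’s ./run-tests.sh:

```shell
# Mark both endpoints in one command: bad = HEAD, good = v2.0.0
git bisect start HEAD v2.0.0

# git checks out each candidate and runs the script;
# exit 0 marks it good, 1-127 (except 125) bad, 125 skips the commit
git bisect run ./run-tests.sh

# Return to the original branch when done
git bisect reset
```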

February 24, 2026 · 6 min · 1110 words · Rob Washington

Nginx Configuration Patterns: From Basic Proxy to Production Ready

Nginx sits in front of most web applications. It handles SSL, load balancing, static files, and proxying — all while being incredibly efficient. Here are the configurations you’ll actually use.

Basic Structure

```nginx
# /etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 1024;
    multi_accept on;
}

http {
    # Basic settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;  # Hide nginx version

    # MIME types
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Gzip
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml;

    # Virtual hosts
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
```

Simple Reverse Proxy

```nginx
# /etc/nginx/sites-available/myapp
server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

SSL with Let’s Encrypt

After running certbot --nginx: ...
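The SSL section is truncated; for context, the server block certbot leaves behind looks roughly like this (cert paths follow Let’s Encrypt’s /etc/letsencrypt/live layout — treat this as a sketch, not certbot’s exact output):

```nginx
server {
    listen 443 ssl;
    server_name myapp.example.com;

    ssl_certificate     /etc/letsencrypt/live/myapp.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Redirect plain HTTP to HTTPS
server {
    listen 80;
    server_name myapp.example.com;
    return 301 https://$host$request_uri;
}
```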

February 24, 2026 · 6 min · 1232 words · Rob Washington

Makefile Patterns: Task Running That Works Everywhere

Make has been around since 1976. It’s installed on virtually every Unix system. And while it was designed for compiling C programs, it’s become a universal task runner for any project. No npm, no pip, no cargo — just make.

Why Make?

- Zero dependencies — Already on your system
- Declarative — Describe what you want, not how to get it
- Incremental — Only runs what’s needed
- Self-documenting — make help shows available targets
- Universal — Works the same on Linux, macOS, CI systems

Basic Structure

```make
# Variables
APP_NAME := myapp
VERSION := 1.0.0

# Default target (runs when you just type 'make')
.DEFAULT_GOAL := help

# Phony targets don't create files
.PHONY: build test clean help

build:
	go build -o $(APP_NAME) ./cmd/$(APP_NAME)

test:
	go test ./...

clean:
	rm -f $(APP_NAME)

help:
	@echo "Available targets:"
	@echo "  build  - Build the application"
	@echo "  test   - Run tests"
	@echo "  clean  - Remove build artifacts"
```

Self-Documenting Makefiles

The best pattern: automatic help generation from comments. ...
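A sketch of the pattern the post is introducing — `##` comments on each target plus an awk one-liner that prints them (a common community idiom, not necessarily the post’s exact version; recipe lines must be tab-indented):

```make
.PHONY: help build test

help: ## Show this help
	@awk 'BEGIN {FS = ":.*## "} /^[a-zA-Z_-]+:.*## / {printf "  %-10s %s\n", $$1, $$2}' $(MAKEFILE_LIST)

build: ## Build the application
	go build -o $(APP_NAME) ./cmd/$(APP_NAME)

test: ## Run tests
	go test ./...
```

Adding a target now means adding one line; `make help` stays current for free.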

February 24, 2026 · 8 min · 1544 words · Rob Washington

Terraform State Management: Avoiding the Footguns

Terraform state is both essential and dangerous. It’s how Terraform knows what exists, what changed, and what to do. Mismanage it, and you’ll either destroy production or spend hours untangling drift.

What State Actually Is

State is Terraform’s record of reality. It maps your configuration to real resources:

```json
{
  "resources": [
    {
      "type": "aws_instance",
      "name": "web",
      "instances": [
        {
          "attributes": {
            "id": "i-0abc123def456",
            "ami": "ami-12345678",
            "instance_type": "t3.medium"
          }
        }
      ]
    }
  ]
}
```

Without state, Terraform would: ...
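Since the post is about state footguns, the standard first safeguard is remote state with locking; a hedged sketch of an S3 backend (the bucket and table names are invented for illustration):

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"   # hypothetical bucket
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"           # hypothetical lock table
    encrypt        = true
  }
}
```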

February 24, 2026 · 7 min · 1386 words · Rob Washington

SSH Tips and Tricks: Beyond Basic Connections

SSH is the workhorse of remote access. But most people only scratch the surface. Here’s how to use it like a pro.

SSH Config: Stop Typing So Much

Instead of:

```shell
ssh -i ~/.ssh/prod-key.pem -p 2222 ubuntu@ec2-54-123-45-67.compute-1.amazonaws.com
```

Create ~/.ssh/config:

```
Host prod
    HostName ec2-54-123-45-67.compute-1.amazonaws.com
    User ubuntu
    Port 2222
    IdentityFile ~/.ssh/prod-key.pem
```

Now just: ...

February 24, 2026 · 8 min · 1594 words · Rob Washington