Debugging Production Issues Without Breaking Things

Production is sacred. When something breaks, you need to investigate without making it worse. Here’s how.

Rule Zero: Don’t Make It Worse

Before touching anything:

- Don’t restart services until you understand the problem
- Don’t deploy fixes without knowing the root cause
- Don’t clear logs you might need for investigation
- Don’t scale down what might be handling load

Stabilize first, investigate second, fix third.

Start With Observability

Check Dashboards

Before SSH-ing anywhere: ...
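In that spirit, a first triage pass can be scripted so it is strictly read-only. This is a sketch, not from the post — the helper name `triage` and the choice of commands are illustrative; it gathers state and changes nothing:

```shell
# Hypothetical read-only triage helper: gathers state, changes nothing.
triage() {
    echo "== Uptime and load =="
    uptime                                   # rising load average hints at CPU or run-queue pressure
    echo "== Disk usage =="
    df -h                                    # a full disk explains many sudden failures
    echo "== Memory =="
    command -v free >/dev/null && free -m    # swap thrash / OOM pressure (skipped if free is absent)
}

triage
```

Because nothing here mutates state, it is safe to run repeatedly while forming a hypothesis.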

February 28, 2026 · 6 min · 1168 words · Rob Washington

Bash Scripting Patterns for Reliable Automation

Bash scripts glue systems together. Here’s how to write them without the usual fragility.

Script Header

Always start with:

```bash
#!/usr/bin/env bash
set -euo pipefail

# -e: Exit on error
# -u: Error on undefined variables
# -o pipefail: Fail if any pipe command fails
```

Argument Parsing

Simple Positional

```bash
#!/usr/bin/env bash
set -euo pipefail

if [[ $# -lt 1 ]]; then
    echo "Usage: $0 <filename>" >&2
    exit 1
fi

FILENAME="$1"
```

With Options (getopts)

```bash
#!/usr/bin/env bash
set -euo pipefail

usage() {
    echo "Usage: $0 [-v] [-o output] [-n count] input"
    echo "  -v          Verbose mode"
    echo "  -o output   Output file"
    echo "  -n count    Number of iterations"
    exit 1
}

VERBOSE=false
OUTPUT=""
COUNT=1

while getopts "vo:n:h" opt; do
    case $opt in
        v) VERBOSE=true ;;
        o) OUTPUT="$OPTARG" ;;
        n) COUNT="$OPTARG" ;;
        h) usage ;;
        *) usage ;;
    esac
done
shift $((OPTIND - 1))

if [[ $# -lt 1 ]]; then
    usage
fi
INPUT="$1"
```

Long Options

```bash
#!/usr/bin/env bash
set -euo pipefail

VERBOSE=false
OUTPUT=""

while [[ $# -gt 0 ]]; do
    case $1 in
        -v|--verbose)
            VERBOSE=true
            shift
            ;;
        -o|--output)
            OUTPUT="$2"
            shift 2
            ;;
        -h|--help)
            usage
            ;;
        -*)
            echo "Unknown option: $1" >&2
            exit 1
            ;;
        *)
            break
            ;;
    esac
done
```

Error Handling

Trap for Cleanup

```bash
#!/usr/bin/env bash
set -euo pipefail

TMPDIR=""

cleanup() {
    if [[ -n "$TMPDIR" && -d "$TMPDIR" ]]; then
        rm -rf "$TMPDIR"
    fi
}
trap cleanup EXIT

TMPDIR=$(mktemp -d)
# Work with $TMPDIR - it's cleaned up on exit, error, or interrupt
```

Custom Error Handler

```bash
#!/usr/bin/env bash
set -euo pipefail

error() {
    echo "Error: $1" >&2
    exit "${2:-1}"
}

warn() {
    echo "Warning: $1" >&2
}

# Usage
[[ -f "$CONFIG" ]] || error "Config file not found: $CONFIG"
```

Detailed Error Reporting

```bash
#!/usr/bin/env bash
set -euo pipefail

on_error() {
    echo "Error on line $1" >&2
    exit 1
}
trap 'on_error $LINENO' ERR
```

Variables and Defaults

```bash
# Default value
NAME="${1:-default}"

# Error if unset
NAME="${1:?Error: name required}"

# Default only if unset (not empty)
NAME="${NAME-default}"

# Assign default if unset
: "${NAME:=default}"
```

String Operations

```bash
FILE="/path/to/file.txt"

# Extract parts
echo "${FILE##*/}"   # file.txt (basename)
echo "${FILE%/*}"    # /path/to (dirname)
echo "${FILE%.txt}"  # /path/to/file (remove extension)
echo "${FILE##*.}"   # txt (extension only)

# Replace
echo "${FILE/path/new}"  # /new/to/file.txt
echo "${FILE//t/T}"      # /paTh/To/file.TxT (all occurrences)

# Case conversion
echo "${FILE^^}"  # Uppercase
echo "${FILE,,}"  # Lowercase

# Length
echo "${#FILE}"  # String length
```

Conditionals

```bash
# File tests
[[ -f "$FILE" ]]  # File exists
[[ -d "$DIR" ]]   # Directory exists
[[ -r "$FILE" ]]  # Readable
[[ -w "$FILE" ]]  # Writable
[[ -x "$FILE" ]]  # Executable
[[ -s "$FILE" ]]  # Non-empty

# String tests
[[ -z "$VAR" ]]      # Empty
[[ -n "$VAR" ]]      # Non-empty
[[ "$A" == "$B" ]]   # Equal
[[ "$A" != "$B" ]]   # Not equal
[[ "$A" =~ regex ]]  # Regex match

# Numeric tests
[[ $A -eq $B ]]  # Equal
[[ $A -ne $B ]]  # Not equal
[[ $A -lt $B ]]  # Less than
[[ $A -le $B ]]  # Less or equal
[[ $A -gt $B ]]  # Greater than
[[ $A -ge $B ]]  # Greater or equal

# Logical
[[ $A && $B ]]  # And
[[ $A || $B ]]  # Or
[[ ! $A ]]      # Not
```

Loops

```bash
# Over arguments
for arg in "$@"; do
    echo "$arg"
done

# Over array
arr=("one" "two" "three")
for item in "${arr[@]}"; do
    echo "$item"
done

# C-style
for ((i=0; i<10; i++)); do
    echo "$i"
done

# Over files
for file in *.txt; do
    [[ -f "$file" ]] || continue
    echo "$file"
done

# Read lines from file
while IFS= read -r line; do
    echo "$line"
done < "$FILE"

# Read lines from command
while IFS= read -r line; do
    echo "$line"
done < <(some_command)
```

Functions

```bash
# Basic function
greet() {
    local name="$1"
    echo "Hello, $name"
}

# With return value
is_valid() {
    local input="$1"
    [[ "$input" =~ ^[0-9]+$ ]]
}

if is_valid "$value"; then
    echo "Valid"
fi

# Return data via stdout
get_config() {
    cat /etc/myapp/config
}
CONFIG=$(get_config)

# Local variables
process() {
    local tmp
    tmp=$(mktemp)
    # tmp is local to this function
}
```

Arrays

```bash
# Create
arr=("one" "two" "three")
arr[3]="four"

# Access
echo "${arr[0]}"      # First element
echo "${arr[@]}"      # All elements
echo "${#arr[@]}"     # Length
echo "${arr[@]:1:2}"  # Slice (start:length)

# Add
arr+=("five")

# Iterate
for item in "${arr[@]}"; do
    echo "$item"
done

# With index
for i in "${!arr[@]}"; do
    echo "$i: ${arr[$i]}"
done
```

Associative Arrays

```bash
declare -A config
config[host]="localhost"
config[port]="8080"

echo "${config[host]}"
echo "${!config[@]}"  # All keys
echo "${config[@]}"   # All values

for key in "${!config[@]}"; do
    echo "$key: ${config[$key]}"
done
```

Process Substitution

```bash
# Compare two commands
diff <(sort file1) <(sort file2)

# Feed command output as file
while read -r line; do
    echo "$line"
done < <(curl -s "$URL")
```

Subshells

```bash
# Run in subshell (changes don't affect parent)
(
    cd /tmp
    rm -f *.tmp
)
# Still in original directory

# Capture output
result=$(command)

# Capture with error
result=$(command 2>&1)
```

Here Documents

```bash
# Multi-line string
cat << EOF
This is a multi-line string
with $VARIABLE expansion
EOF

# No variable expansion
cat << 'EOF'
This preserves $VARIABLE literally
EOF

# Here string
grep "pattern" <<< "$string"
```

Practical Patterns

Check Dependencies

```bash
check_deps() {
    local deps=("curl" "jq" "git")
    for dep in "${deps[@]}"; do
        if ! command -v "$dep" &> /dev/null; then
            echo "Missing dependency: $dep" >&2
            exit 1
        fi
    done
}
```

Logging

```bash
LOG_FILE="/var/log/myapp.log"

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}

log "Starting process"
```

Confirmation Prompt

```bash
confirm() {
    read -rp "$1 [y/N] " response
    [[ "$response" =~ ^[Yy]$ ]]
}

if confirm "Delete all files?"; then
    rm -rf /tmp/data/*
fi
```

Retry Logic

```bash
retry() {
    local max_attempts="$1"
    local delay="$2"
    shift 2

    local attempt=1
    until "$@"; do
        if ((attempt >= max_attempts)); then
            echo "Failed after $attempt attempts" >&2
            return 1
        fi
        echo "Attempt $attempt failed, retrying in ${delay}s..."
        sleep "$delay"
        ((attempt++))
    done
}

retry 3 5 curl -sf "$URL"
```

Lock File

```bash
LOCKFILE="/var/lock/myapp.lock"

acquire_lock() {
    exec 200>"$LOCKFILE"
    flock -n 200 || {
        echo "Another instance is running" >&2
        exit 1
    }
}

acquire_lock
# Script continues only if lock acquired
```

Complete Example

```bash
#!/usr/bin/env bash
set -euo pipefail

readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly SCRIPT_NAME="$(basename "$0")"

usage() {
    cat << EOF
Usage: $SCRIPT_NAME [options] <input>

Options:
    -o, --output FILE    Output file (default: stdout)
    -v, --verbose        Verbose output
    -h, --help           Show this help

Examples:
    $SCRIPT_NAME data.txt
    $SCRIPT_NAME -o result.txt -v input.txt
EOF
    exit "${1:-0}"
}

log() {
    if [[ "$VERBOSE" == true ]]; then
        echo "[$(date '+%H:%M:%S')] $*" >&2
    fi
}

error() {
    echo "Error: $*" >&2
    exit 1
}

# Defaults
VERBOSE=false
OUTPUT="/dev/stdout"

# Parse arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -o|--output) OUTPUT="$2"; shift 2 ;;
        -v|--verbose) VERBOSE=true; shift ;;
        -h|--help) usage ;;
        -*) error "Unknown option: $1" ;;
        *) break ;;
    esac
done

[[ $# -ge 1 ]] || usage 1
INPUT="$1"

# Validate
[[ -f "$INPUT" ]] || error "File not found: $INPUT"

# Main logic
log "Processing $INPUT"
process_data < "$INPUT" > "$OUTPUT"
log "Done"
```

Bash scripts don’t have to be fragile. Apply these patterns and they’ll work reliably for years. ...

February 28, 2026 · 8 min · 1589 words · Rob Washington

Nginx Configuration Patterns for Web Applications

Nginx powers a huge portion of the web. Understanding its configuration patterns is essential for deploying modern applications.

Basic Structure

```nginx
# /etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Performance
    sendfile on;
    keepalive_timeout 65;

    # Include site configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
```

Static File Server

```nginx
server {
    listen 80;
    server_name example.com;

    root /var/www/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    # Cache static assets
    location ~* \.(css|js|png|jpg|jpeg|gif|ico|svg|woff2?)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }
}
```

Reverse Proxy

Basic Proxy

```nginx
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

WebSocket Support

```nginx
location /ws {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 86400;
}
```

Upstream (Multiple Backends)

```nginx
upstream backend {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
    keepalive 32;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```

Load Balancing

```nginx
upstream backend {
    # Round-robin (default)
    server app1:3000;
    server app2:3000;
    server app3:3000;
}

upstream backend_weighted {
    server app1:3000 weight=3;
    server app2:3000 weight=2;
    server app3:3000 weight=1;
}

upstream backend_ip_hash {
    ip_hash;  # Sticky sessions by IP
    server app1:3000;
    server app2:3000;
}

upstream backend_least_conn {
    least_conn;  # Send to least busy
    server app1:3000;
    server app2:3000;
}
```

Health Checks

```nginx
upstream backend {
    server app1:3000 max_fails=3 fail_timeout=30s;
    server app2:3000 max_fails=3 fail_timeout=30s;
    server app3:3000 backup;  # Only used when others fail
}
```

SSL/TLS Configuration

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern SSL settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;

    # HSTS
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # OCSP Stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
}

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}
```

Security Headers

```nginx
server {
    # Prevent clickjacking
    add_header X-Frame-Options "SAMEORIGIN" always;

    # Prevent MIME sniffing
    add_header X-Content-Type-Options "nosniff" always;

    # XSS protection
    add_header X-XSS-Protection "1; mode=block" always;

    # Referrer policy
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Content Security Policy
    add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline';" always;
}
```

Rate Limiting

```nginx
# Define rate limit zone
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

server {
    location /api/ {
        limit_req zone=api burst=20 nodelay;
        proxy_pass http://backend;
    }
}
```

Connection Limiting

```nginx
limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    location /downloads/ {
        limit_conn addr 5;  # 5 connections per IP
        limit_rate 100k;    # 100KB/s per connection
    }
}
```

Caching

Proxy Cache

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m
                 max_size=1g inactive=60m;

server {
    location / {
        proxy_cache cache;
        proxy_cache_valid 200 1h;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating;
        proxy_cache_background_update on;
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://backend;
    }
}
```

Cache Bypass

```nginx
location / {
    proxy_cache cache;
    proxy_cache_bypass $http_cache_control;
    proxy_no_cache $arg_nocache;
}
```

Gzip Compression

```nginx
http {
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml
        application/rss+xml
        image/svg+xml;
}
```

Location Matching

```nginx
server {
    # Exact match
    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    # Prefix match (case-sensitive)
    location /api/ {
        proxy_pass http://backend;
    }

    # Regex match (case-sensitive)
    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php-fpm.sock;
    }

    # Regex match (case-insensitive)
    location ~* \.(jpg|jpeg|png|gif)$ {
        expires 30d;
    }

    # Prefix match (highest priority after exact)
    location ^~ /static/ {
        root /var/www;
    }
}
```

Priority: = > ^~ > ~ / ~* > prefix ...

February 28, 2026 · 6 min · 1082 words · Rob Washington

Python Patterns for Command-Line Scripts

Python is the go-to language for automation scripts. Here’s how to write CLI tools that are reliable and user-friendly.

Basic Script Structure

```python
#!/usr/bin/env python3
"""One-line description of what this script does."""

import argparse
import sys

def main():
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument('input', help='Input file path')
    parser.add_argument('-o', '--output', help='Output file path')
    parser.add_argument('-v', '--verbose', action='store_true')
    args = parser.parse_args()

    # Your logic here
    process(args.input, args.output, args.verbose)

if __name__ == '__main__':
    main()
```

Argument Parsing with argparse

Positional Arguments

```python
parser.add_argument('filename')           # Required
parser.add_argument('files', nargs='+')   # One or more
parser.add_argument('files', nargs='*')   # Zero or more
parser.add_argument('config', nargs='?')  # Optional positional
```

Optional Arguments

```python
parser.add_argument('-v', '--verbose', action='store_true')
parser.add_argument('-q', '--quiet', action='store_false', dest='verbose')
parser.add_argument('-n', '--count', type=int, default=10)
parser.add_argument('-f', '--format', choices=['json', 'csv', 'table'])
parser.add_argument('--config', type=argparse.FileType('r'))
```

Subcommands

```python
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest='command', required=True)

# 'init' command
init_parser = subparsers.add_parser('init', help='Initialize project')
init_parser.add_argument('--force', action='store_true')

# 'run' command
run_parser = subparsers.add_parser('run', help='Run the application')
run_parser.add_argument('--port', type=int, default=8080)

args = parser.parse_args()

if args.command == 'init':
    do_init(args.force)
elif args.command == 'run':
    do_run(args.port)
```

Error Handling

```python
import sys

def main():
    try:
        result = process()
        return 0
    except FileNotFoundError as e:
        print(f"Error: File not found: {e.filename}", file=sys.stderr)
        return 1
    except PermissionError:
        print("Error: Permission denied", file=sys.stderr)
        return 1
    except KeyboardInterrupt:
        print("\nInterrupted", file=sys.stderr)
        return 130
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        return 1

if __name__ == '__main__':
    sys.exit(main())
```

Logging

```python
import logging

def setup_logging(verbose=False):
    level = logging.DEBUG if verbose else logging.INFO
    logging.basicConfig(
        level=level,
        format='%(asctime)s - %(levelname)s - %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
    )

def main():
    args = parse_args()
    setup_logging(args.verbose)

    logging.info("Starting process")
    logging.debug("Detailed info here")
    logging.warning("Something might be wrong")
    logging.error("Something went wrong")
```

Log to File and Console

```python
def setup_logging(verbose=False, log_file=None):
    handlers = [logging.StreamHandler()]
    if log_file:
        handlers.append(logging.FileHandler(log_file))

    logging.basicConfig(
        level=logging.DEBUG if verbose else logging.INFO,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        handlers=handlers
    )
```

Progress Indicators

Simple Progress

```python
import sys

def process_items(items):
    total = len(items)
    for i, item in enumerate(items, 1):
        process(item)
        print(f"\rProcessing: {i}/{total}", end='', flush=True)
    print()  # Newline at end
```

With tqdm

```python
from tqdm import tqdm

for item in tqdm(items, desc="Processing"):
    process(item)

# Or wrap any iterable
with tqdm(total=100) as pbar:
    for i in range(100):
        do_work()
        pbar.update(1)
```

Reading Input

From File or Stdin

```python
import sys

def read_input(filepath=None):
    if filepath:
        with open(filepath) as f:
            return f.read()
    elif not sys.stdin.isatty():
        return sys.stdin.read()
    else:
        raise ValueError("No input provided")
```

Line by Line

```python
import fileinput

# Reads from files in args or stdin
for line in fileinput.input():
    process(line.strip())
```

Output Formatting

JSON Output

```python
import json

def output_json(data, pretty=False):
    if pretty:
        print(json.dumps(data, indent=2, default=str))
    else:
        print(json.dumps(data, default=str))
```

Table Output

```python
def print_table(headers, rows):
    # Calculate column widths
    widths = [len(h) for h in headers]
    for row in rows:
        for i, cell in enumerate(row):
            widths[i] = max(widths[i], len(str(cell)))

    # Print header
    header_line = ' | '.join(h.ljust(widths[i]) for i, h in enumerate(headers))
    print(header_line)
    print('-' * len(header_line))

    # Print rows
    for row in rows:
        print(' | '.join(str(cell).ljust(widths[i]) for i, cell in enumerate(row)))
```

With tabulate

```python
from tabulate import tabulate

data = [
    ['Alice', 30, 'Engineer'],
    ['Bob', 25, 'Designer'],
]
print(tabulate(data, headers=['Name', 'Age', 'Role'], tablefmt='grid'))
```

Configuration Files

YAML Config

```python
import yaml
from pathlib import Path

def load_config(config_path=None):
    paths = [
        config_path,
        Path.home() / '.myapp.yaml',
        Path('/etc/myapp/config.yaml'),
    ]
    for path in paths:
        if path and Path(path).exists():
            with open(path) as f:
                return yaml.safe_load(f)
    return {}  # Defaults
```

Environment Variables

```python
import os

def get_config():
    return {
        'api_key': os.environ.get('API_KEY'),
        'debug': os.environ.get('DEBUG', '').lower() in ('true', '1', 'yes'),
        'timeout': int(os.environ.get('TIMEOUT', '30')),
    }
```

Running External Commands

```python
import subprocess

def run_command(cmd, check=True):
    """Run command and return output."""
    result = subprocess.run(
        cmd,
        shell=isinstance(cmd, str),
        capture_output=True,
        text=True,
        check=check
    )
    return result.stdout.strip()

# Usage
output = run_command(['git', 'status', '--short'])
output = run_command('ls -la | head -5')
```

With Timeout

```python
try:
    result = subprocess.run(
        ['slow-command'],
        timeout=30,
        capture_output=True,
        text=True
    )
except subprocess.TimeoutExpired:
    print("Command timed out")
```

Temporary Files

```python
import tempfile
from pathlib import Path

# Temporary file
with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
    f.write('{"data": "value"}')
    temp_path = f.name

# Temporary directory
with tempfile.TemporaryDirectory() as tmpdir:
    work_file = Path(tmpdir) / 'work.txt'
    work_file.write_text('working...')
# Directory deleted when context exits
```

Path Handling

```python
from pathlib import Path

def process_files(directory):
    base = Path(directory)

    # Find files
    for path in base.glob('**/*.py'):
        print(f"Processing: {path}")

        # Path operations
        print(f"  Name: {path.name}")
        print(f"  Stem: {path.stem}")
        print(f"  Suffix: {path.suffix}")
        print(f"  Parent: {path.parent}")

        # Read/write
        content = path.read_text()
        path.with_suffix('.bak').write_text(content)
```

Complete Example

```python
#!/usr/bin/env python3
"""Process log files and output statistics."""

import argparse
import json
import logging
import sys
from collections import Counter
from pathlib import Path

def setup_logging(verbose):
    logging.basicConfig(
        level=logging.DEBUG if verbose else logging.INFO,
        format='%(levelname)s: %(message)s'
    )

def parse_args():
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter
    )
    parser.add_argument(
        'logfiles',
        nargs='+',
        type=Path,
        help='Log files to process'
    )
    parser.add_argument(
        '-o', '--output',
        type=argparse.FileType('w'),
        default=sys.stdout,
        help='Output file (default: stdout)'
    )
    parser.add_argument(
        '-f', '--format',
        choices=['json', 'text'],
        default='text',
        help='Output format'
    )
    parser.add_argument(
        '-v', '--verbose',
        action='store_true',
        help='Enable verbose output'
    )
    return parser.parse_args()

def analyze_logs(logfiles):
    stats = Counter()
    for logfile in logfiles:
        logging.info(f"Processing {logfile}")
        if not logfile.exists():
            logging.warning(f"File not found: {logfile}")
            continue
        for line in logfile.read_text().splitlines():
            if 'ERROR' in line:
                stats['errors'] += 1
            elif 'WARNING' in line:
                stats['warnings'] += 1
            stats['total'] += 1
    return dict(stats)

def output_results(stats, output, fmt):
    if fmt == 'json':
        json.dump(stats, output, indent=2)
        output.write('\n')
    else:
        for key, value in stats.items():
            output.write(f"{key}: {value}\n")

def main():
    args = parse_args()
    setup_logging(args.verbose)

    try:
        stats = analyze_logs(args.logfiles)
        output_results(stats, args.output, args.format)
        return 0
    except Exception as e:
        logging.error(f"Failed: {e}")
        return 1

if __name__ == '__main__':
    sys.exit(main())
```

Usage: ...

February 28, 2026 · 6 min · 1202 words · Rob Washington

Docker Compose Patterns for Local Development

Docker Compose turns “works on my machine” into “works everywhere.” Here’s how to structure it for real development workflows.

Basic Structure

```yaml
# docker-compose.yml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
    environment:
      - NODE_ENV=development

  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: devpass
```

Start everything: ...

February 28, 2026 · 6 min · 1088 words · Rob Washington

Git Workflow Patterns for Solo and Team Development

Git is powerful. It’s also easy to mess up. Here are workflows that keep repositories clean and teams productive.

Solo Workflow

For personal projects, keep it simple:

```bash
# Work on main, commit often
git add -A
git commit -m "Add user authentication"

# Push when ready
git push origin main
```

Use branches for experiments:

```bash
git checkout -b experiment/new-ui
# work...
git checkout main
git merge experiment/new-ui  # or delete if failed
```

Feature Branch Workflow

The most common team pattern: ...
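The branch-then-merge cycle described above can be walked end to end in a throwaway repository. This is a sketch, not from the post — the branch name `feature/login`, file names, and identity values are all hypothetical:

```shell
# Demonstrate branch -> commit -> merge back in a disposable repo.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com   # local identity so commits work anywhere
git config user.name "Dev"

echo base > app.txt
git add app.txt
git commit -qm "Initial commit"
default=$(git symbolic-ref --short HEAD)  # main or master, depending on git defaults

git checkout -qb feature/login            # branch off for the feature
echo login >> app.txt
git commit -qam "Add login"

git checkout -q "$default"
git merge -q --no-ff -m "Merge feature/login" feature/login
git branch -qd feature/login              # delete the merged branch
```

`--no-ff` forces a merge commit, which keeps the feature's history visible as a unit; drop it if you prefer a linear history.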

February 28, 2026 · 7 min · 1385 words · Rob Washington

rsync Patterns for Reliable Backups and Deployments

rsync is the standard for efficient file transfer. It only copies what changed, handles interruptions gracefully, and works over SSH. Here’s how to use it well.

Basic Syntax

```bash
rsync [options] source destination
```

The trailing slash matters:

```bash
rsync -av src/ dest/  # Contents of src into dest
rsync -av src dest/   # Directory src into dest (creates dest/src/)
```

Essential Options

```bash
-a, --archive    # Archive mode (preserves permissions, timestamps, etc.)
-v, --verbose    # Show what's being transferred
-z, --compress   # Compress during transfer
-P               # Progress + partial (resume interrupted transfers)
--delete         # Remove files from dest that aren't in source
-n, --dry-run    # Show what would happen
```

Common Patterns

Local Backup

```bash
# Mirror directory
rsync -av --delete /home/user/documents/ /backup/documents/

# Dry run first
rsync -avn --delete /home/user/documents/ /backup/documents/
```

Remote Sync Over SSH

```bash
# Push to remote
rsync -avz -e ssh /local/dir/ user@server:/remote/dir/

# Pull from remote
rsync -avz -e ssh user@server:/remote/dir/ /local/dir/

# Custom SSH port
rsync -avz -e "ssh -p 2222" /local/ user@server:/remote/
```

With Progress

```bash
# Single file progress
rsync -avP largefile.zip server:/dest/

# Overall progress (rsync 3.1+)
rsync -av --info=progress2 /source/ /dest/
```

Exclusions

```bash
# Exclude patterns
rsync -av --exclude='*.log' --exclude='tmp/' /source/ /dest/

# Exclude from file
rsync -av --exclude-from='exclude.txt' /source/ /dest/

# Include only certain files
rsync -av --include='*.py' --exclude='*' /source/ /dest/
```

Example exclude file: ...
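For `--exclude-from`, the file holds one pattern per line; blank lines and lines starting with `#` or `;` are ignored. A hypothetical `exclude.txt` (these entries are illustrative, not from the post) might look like:

```
# exclude.txt — one rsync pattern per line
*.log
*.tmp
node_modules/
.git/
cache/
```

Patterns ending in `/` match only directories, which keeps a file literally named `cache` from being skipped by accident.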

February 28, 2026 · 5 min · 988 words · Rob Washington

systemd Timers: The Modern Alternative to Cron

Cron works. It’s also from 1975. systemd timers offer logging integration, dependency handling, and more flexible scheduling. Here’s how to use them.

Why Timers Over Cron?

- Logging: Output goes to journald automatically
- Dependencies: Wait for network, mounts, or other services
- Flexibility: Calendar events, monotonic timers, randomized delays
- Visibility: systemctl list-timers shows everything
- Consistency: Same management as other systemd units

Basic Structure

A timer needs two files:

- A .timer unit (the schedule)
- A .service unit (the job)

Place them in /etc/systemd/system/ (system-wide) or ~/.config/systemd/user/ (user). ...
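As a sketch of that two-file layout — unit names, the script path, and the schedule here are hypothetical placeholders:

```ini
# /etc/systemd/system/backup.service — the job (run on demand or by the timer)
[Unit]
Description=Nightly backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/backup.timer — the schedule
[Unit]
Description=Run backup nightly

[Timer]
OnCalendar=*-*-* 02:00:00
RandomizedDelaySec=15m   # spread load across hosts
Persistent=true          # run at boot if the scheduled time was missed

[Install]
WantedBy=timers.target
```

Matching base names (`backup.timer` activates `backup.service`) is the default wiring; enable the timer, not the service, with `systemctl enable --now backup.timer`.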

February 28, 2026 · 5 min · 944 words · Rob Washington

awk Patterns for Log Analysis and Text Processing

awk sits between grep and a full programming language. It’s perfect for columnar data, log files, and quick text transformations.

The Basic Pattern

```bash
awk 'pattern { action }' file
```

If pattern matches, run action. No pattern means every line. No action means print.

```bash
# Print everything
awk '{ print }' file.txt

# Print lines matching pattern
awk '/error/' file.txt

# Print second column
awk '{ print $2 }' file.txt

# Combined: errors, show timestamp and message
awk '/error/ { print $1, $4 }' app.log
```

Field Handling

awk splits lines into fields by whitespace (default): ...
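The field variables can be seen on a sample line — this quick sketch (sample data invented here) shows `$1` for the first field, `$NF` for the last, `NF` for the field count, and `-F` for a custom separator:

```shell
# Default splitting: whitespace-separated fields
echo "alice 30 engineer" | awk '{ print $1, $NF, NF }'
# → alice engineer 3

# -F changes the field separator (here, comma)
echo "alice,30,engineer" | awk -F',' '{ print $2 }'
# → 30
```

`$0` remains the whole line, so a pattern can match on the full record while the action prints individual fields.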

February 28, 2026 · 7 min · 1401 words · Rob Washington

curl Patterns for API Development and Testing

curl is the universal HTTP client. Every developer should know it well—for debugging, testing, and automation.

Basic Requests

```bash
# GET (default)
curl https://api.example.com/users

# POST with data
curl -X POST https://api.example.com/users \
  -d '{"name":"alice"}'

# With headers
curl -H "Content-Type: application/json" \
  -H "Authorization: Bearer token123" \
  https://api.example.com/users
```

JSON APIs

```bash
# POST JSON (sets Content-Type automatically)
curl -X POST https://api.example.com/users \
  --json '{"name":"alice","email":"alice@example.com"}'

# Or explicitly
curl -X POST https://api.example.com/users \
  -H "Content-Type: application/json" \
  -d '{"name":"alice"}'

# Pretty print response
curl -s https://api.example.com/users | jq .

# From file
curl -X POST https://api.example.com/users \
  -H "Content-Type: application/json" \
  -d @payload.json
```

HTTP Methods

```bash
curl -X GET https://api.example.com/users/1
curl -X POST https://api.example.com/users -d '...'
curl -X PUT https://api.example.com/users/1 -d '...'
curl -X PATCH https://api.example.com/users/1 -d '...'
curl -X DELETE https://api.example.com/users/1
curl -I https://api.example.com/users          # Headers only (use -I; -X HEAD stalls waiting for a body)
curl -X OPTIONS https://api.example.com/users  # CORS preflight
```

Authentication

```bash
# Bearer token
curl -H "Authorization: Bearer eyJhbGc..." \
  https://api.example.com/me

# Basic auth
curl -u username:password https://api.example.com/users
# Or
curl -H "Authorization: Basic $(echo -n 'user:pass' | base64)" \
  https://api.example.com/users

# API key in header
curl -H "X-API-Key: secret123" https://api.example.com/data

# API key in query
curl "https://api.example.com/data?api_key=secret123"
```

Response Inspection

```bash
# Show headers
curl -I https://api.example.com/users  # HEAD request
curl -i https://api.example.com/users  # Include headers with body

# Verbose (see full request/response)
curl -v https://api.example.com/users

# Just status code
curl -s -o /dev/null -w "%{http_code}" https://api.example.com/users

# Multiple stats
curl -s -o /dev/null -w "code: %{http_code}\ntime: %{time_total}s\nsize: %{size_download} bytes\n" \
  https://api.example.com/users
```

File Uploads

```bash
# Form upload
curl -X POST https://api.example.com/upload \
  -F "file=@document.pdf"

# Multiple files
curl -X POST https://api.example.com/upload \
  -F "file1=@doc1.pdf" \
  -F "file2=@doc2.pdf"

# With additional form fields
curl -X POST https://api.example.com/upload \
  -F "file=@photo.jpg" \
  -F "description=Profile photo" \
  -F "public=true"

# Binary data
curl -X POST https://api.example.com/upload \
  -H "Content-Type: application/octet-stream" \
  --data-binary @file.bin
```

Handling Redirects

```bash
# Follow redirects
curl -L https://example.com/shortened-url

# Show redirect chain
curl -L -v https://example.com/shortened-url 2>&1 | grep "< location"

# Limit redirects
curl -L --max-redirs 3 https://example.com/url
```

Timeouts and Retries

```bash
# Connection timeout (seconds)
curl --connect-timeout 5 https://api.example.com/

# Max time for entire operation
curl --max-time 30 https://api.example.com/slow-endpoint

# Retry on failure
curl --retry 3 --retry-delay 2 https://api.example.com/

# Retry on any error, not just transient ones
curl --retry 3 --retry-all-errors https://api.example.com/
```

Saving Output

```bash
# Save to file
curl -o response.json https://api.example.com/users

# Use remote filename
curl -O https://example.com/file.zip

# Save headers separately
curl -D headers.txt -o body.json https://api.example.com/users

# Append to file
curl https://api.example.com/users >> all_responses.json
```

Cookie Handling

```bash
# Send cookies
curl -b "session=abc123; token=xyz" https://api.example.com/

# Save cookies to file
curl -c cookies.txt https://api.example.com/login \
  -d "user=admin&pass=secret"

# Use saved cookies
curl -b cookies.txt https://api.example.com/dashboard

# Both save and send
curl -b cookies.txt -c cookies.txt https://api.example.com/
```

Testing Webhooks

```bash
# Simulate GitHub webhook
curl -X POST http://localhost:3000/webhook \
  -H "Content-Type: application/json" \
  -H "X-GitHub-Event: push" \
  -H "X-Hub-Signature-256: sha256=..." \
  -d @github-payload.json

# Stripe webhook
curl -X POST http://localhost:3000/stripe/webhook \
  -H "Content-Type: application/json" \
  -H "Stripe-Signature: t=...,v1=..." \
  -d @stripe-event.json
```

SSL/TLS Options

```bash
# Skip certificate verification (development only!)
curl -k https://self-signed.example.com/

# Use specific CA bundle
curl --cacert /path/to/ca-bundle.crt https://api.example.com/

# Client certificate
curl --cert client.crt --key client.key https://mtls.example.com/
```

Proxy Configuration

```bash
# HTTP proxy
curl -x http://proxy:8080 https://api.example.com/

# SOCKS5 proxy
curl --socks5 localhost:1080 https://api.example.com/

# No proxy for specific hosts
curl --noproxy "localhost,*.internal" https://api.example.com/
```

Performance Testing

```bash
# Time breakdown
curl -s -o /dev/null -w "\
namelookup:    %{time_namelookup}s\n\
connect:       %{time_connect}s\n\
appconnect:    %{time_appconnect}s\n\
pretransfer:   %{time_pretransfer}s\n\
redirect:      %{time_redirect}s\n\
starttransfer: %{time_starttransfer}s\n\
total:         %{time_total}s\n" \
  https://api.example.com/

# Loop for load testing (basic)
for i in {1..100}; do
  curl -s -o /dev/null -w "%{http_code} %{time_total}\n" \
    https://api.example.com/
done
```

Config Files

Create ~/.curlrc for defaults: ...
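The `~/.curlrc` format is one long option per line, without the leading dashes. A minimal sketch — these particular defaults are chosen for illustration, not taken from the post:

```
# ~/.curlrc — applied to every curl invocation
silent
show-error
connect-timeout = 5
max-time = 30
```

Keep this file conservative: options here apply to every invocation, including scripts, so avoid anything that changes output parsing (a one-off `curl --no-silent ...` can override per run).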

February 28, 2026 · 6 min · 1116 words · Rob Washington