Git Hooks: Automate Quality Checks Before Code Leaves Your Machine

Git hooks are scripts that run automatically at specific points in your Git workflow. Use them to catch problems before they become PR comments. Here’s how to set them up effectively.

Hook Basics

Git hooks live in `.git/hooks/`. They’re executable scripts that run at specific events:

```
.git/hooks/
β”œβ”€β”€ pre-commit   # Before commit is created
β”œβ”€β”€ commit-msg   # After commit message is entered
β”œβ”€β”€ pre-push     # Before push to remote
β”œβ”€β”€ post-merge   # After merge completes
└── ...
```

To enable a hook, create an executable script with the hook’s name: ...
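As a sketch of what such a script can do, here is a minimal pre-commit hook; the Python syntax check is a placeholder assumption, standing in for whatever linters your project actually runs:

```shell
#!/usr/bin/env bash
# .git/hooks/pre-commit -- a minimal sketch, not the article's own hook.
set -euo pipefail

# Only inspect files that are actually staged for this commit.
# (The fallback to empty keeps the script runnable outside a repo.)
staged=$(git diff --cached --name-only --diff-filter=ACM 2>/dev/null || true)

for f in $staged; do
    case "$f" in
        *.py) python3 -m py_compile "$f" ;;  # reject commits with syntax errors
    esac
done
```

A non-zero exit from any check aborts the commit, which is the whole mechanism.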

March 1, 2026 Β· 5 min Β· 1056 words Β· Rob Washington

Background Job Patterns That Actually Scale

Every production system eventually needs background jobs. Email notifications, report generation, data syncing, webhook processingβ€”the work that can’t (or shouldn’t) happen during a user request. Here’s what I’ve learned about making them reliable.

The Naive Approach (And Why It Breaks)

Most developers start with something like this:

```python
@app.route('/signup')
def signup():
    user = create_user(request.form)
    send_welcome_email(user)  # Blocks the response
    return redirect('/dashboard')
```

This works until it doesn’t. The email service has a 5-second timeout. Now your signup page feels broken. Or the email service is down, and signups fail entirely. ...
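The excerpt cuts off before the fix, but the standard remedy is to enqueue the slow work and return immediately. A minimal in-process sketch of that idea (real systems use a broker-backed queue such as Redis or RabbitMQ so jobs survive restarts; `sent` is a stand-in for the email service, and the Flask pieces are stubbed out):

```python
import queue
import threading

job_queue: queue.Queue = queue.Queue()
sent = []  # stand-in for the email service, for demonstration only

def send_welcome_email(user):
    sent.append(user)  # the slow work happens here, off the request path

def worker():
    # A single background worker draining the queue.
    while True:
        func, args = job_queue.get()
        try:
            func(*args)
        finally:
            job_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def signup(form):
    user = form["email"]  # create_user(...) in the original
    job_queue.put((send_welcome_email, (user,)))  # returns immediately
    return "/dashboard"

signup({"email": "a@example.com"})
job_queue.join()  # block until the worker drains the queue (demo only)
```

If the email call raises, the user’s request is unaffected; retries, persistence, and visibility are what a real queue system adds on top.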

March 1, 2026 Β· 4 min Β· 831 words Β· Rob Washington

GitHub Actions Patterns for Practical CI/CD

GitHub Actions has become the default CI/CD for many teams. Here are patterns I’ve seen work well in production, and a few anti-patterns to avoid.

The Foundation: A Reusable Test Workflow

```yaml
name: Test

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm test
```

Key details: ...
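One pattern worth adding on top of a base workflow like this (my addition; the excerpt’s β€œKey details” list is truncated): a `concurrency` group cancels superseded runs, so rapid pushes to the same branch don’t queue redundant builds.

```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```

This goes at the top level of the workflow file, alongside `on:` and `jobs:`.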

February 28, 2026 Β· 4 min Β· 765 words Β· Rob Washington

Ansible Patterns for Maintainable Infrastructure

Ansible is simple until it isn’t. Here’s how to structure projects that remain maintainable as they grow.

Project Structure

```
ansible-project/
β”œβ”€β”€ ansible.cfg
β”œβ”€β”€ inventory/
β”‚   └── production/
β”‚       β”œβ”€β”€ hosts.yml
β”‚       └── group_vars/
β”‚           β”œβ”€β”€ all.yml
β”‚           └── webservers.yml
β”œβ”€β”€ playbooks/
β”‚   β”œβ”€β”€ site.yml
β”‚   └── webservers.yml
β”œβ”€β”€ roles/
β”‚   β”œβ”€β”€ common/
β”‚   β”œβ”€β”€ nginx/
β”‚   β”œβ”€β”€ app/
β”‚   └── postgresql/
└── requirements.yml
```

Inventory Best Practices

YAML Inventory

```yaml
# inventory/production/hosts.yml
all:
  children:
    webservers:
      hosts:
        web1.example.com:
        web2.example.com:
      vars:
        http_port: 80
    databases:
      hosts:
        db1.example.com:
          postgresql_version: 15
        db2.example.com:
          postgresql_version: 14
    loadbalancers:
      hosts:
        lb1.example.com:
```

Group Variables

```yaml
# inventory/production/group_vars/all.yml
---
ansible_user: deploy
ansible_python_interpreter: /usr/bin/python3

ntp_servers:
  - 0.pool.ntp.org
  - 1.pool.ntp.org

common_packages:
  - vim
  - htop
  - curl
```

```yaml
# inventory/production/group_vars/webservers.yml
---
nginx_worker_processes: auto
nginx_worker_connections: 1024
app_port: 3000
```

Role Structure

```
roles/nginx/
β”œβ”€β”€ defaults/
β”‚   └── main.yml      # Default variables (users can override)
β”œβ”€β”€ vars/
β”‚   └── main.yml      # Role variables (higher precedence)
β”œβ”€β”€ tasks/
β”‚   └── main.yml      # The main logic
β”œβ”€β”€ handlers/
β”‚   └── main.yml      # Handlers (restart, reload)
β”œβ”€β”€ templates/
β”‚   └── *.j2          # Jinja2 templates
β”œβ”€β”€ files/
β”‚   └── ...           # Static files
β”œβ”€β”€ meta/
β”‚   └── main.yml      # Role metadata (dependencies)
└── README.md
```

Role Defaults

```yaml
# roles/nginx/defaults/main.yml
---
nginx_user: www-data
nginx_worker_processes: auto
nginx_worker_connections: 1024
nginx_keepalive_timeout: 65
nginx_sites: []
```

Role Tasks

```yaml
# roles/nginx/tasks/main.yml
---
- name: Install nginx
  ansible.builtin.apt:
    name: nginx
    state: present
    update_cache: true
  become: true

- name: Configure nginx
  ansible.builtin.template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    owner: root
    group: root
    mode: '0644'
  become: true
  notify: Reload nginx

- name: Configure sites
  ansible.builtin.template:
    src: site.conf.j2
    dest: "/etc/nginx/sites-available/{{ item.name }}"
    owner: root
    group: root
    mode: '0644'
  loop: "{{ nginx_sites }}"
  become: true
  notify: Reload nginx

- name: Enable sites
  ansible.builtin.file:
    src: "/etc/nginx/sites-available/{{ item.name }}"
    dest: "/etc/nginx/sites-enabled/{{ item.name }}"
    state: link
  loop: "{{ nginx_sites }}"
  become: true
  notify: Reload nginx

- name: Start nginx
  ansible.builtin.service:
    name: nginx
    state: started
    enabled: true
  become: true
```

Handlers

```yaml
# roles/nginx/handlers/main.yml
---
- name: Reload nginx
  ansible.builtin.service:
    name: nginx
    state: reloaded
  become: true

- name: Restart nginx
  ansible.builtin.service:
    name: nginx
    state: restarted
  become: true
```

Playbook Patterns

Main Playbook

```yaml
# playbooks/site.yml
---
- name: Apply common configuration
  hosts: all
  roles:
    - common

- name: Configure web servers
  hosts: webservers
  roles:
    - nginx
    - app

- name: Configure databases
  hosts: databases
  roles:
    - postgresql
```

Tagged Playbook

```yaml
# playbooks/webservers.yml
---
- name: Configure web servers
  hosts: webservers
  become: true

  tasks:
    - name: Install packages
      ansible.builtin.apt:
        name: "{{ item }}"
        state: present
      loop:
        - nginx
        - python3
      tags:
        - packages

    - name: Deploy application
      ansible.builtin.git:
        repo: "{{ app_repo }}"
        dest: /var/www/app
        version: "{{ app_version }}"
      tags:
        - deploy

    - name: Configure nginx
      ansible.builtin.template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: Reload nginx
      tags:
        - config
```

Run specific tags: ...

February 28, 2026 Β· 9 min Β· 1911 words Β· Rob Washington

Bash Scripting Patterns for Reliable Automation

Bash scripts glue systems together. Here’s how to write them without the usual fragility.

Script Header

Always start with:

```bash
#!/usr/bin/env bash
set -euo pipefail

# -e: Exit on error
# -u: Error on undefined variables
# -o pipefail: Fail if any pipe command fails
```

Argument Parsing

Simple Positional

```bash
#!/usr/bin/env bash
set -euo pipefail

if [[ $# -lt 1 ]]; then
    echo "Usage: $0 <filename>" >&2
    exit 1
fi

FILENAME="$1"
```

With Options (getopts)

```bash
#!/usr/bin/env bash
set -euo pipefail

usage() {
    echo "Usage: $0 [-v] [-o output] [-n count] input"
    echo "  -v         Verbose mode"
    echo "  -o output  Output file"
    echo "  -n count   Number of iterations"
    exit 1
}

VERBOSE=false
OUTPUT=""
COUNT=1

while getopts "vo:n:h" opt; do
    case $opt in
        v) VERBOSE=true ;;
        o) OUTPUT="$OPTARG" ;;
        n) COUNT="$OPTARG" ;;
        h) usage ;;
        *) usage ;;
    esac
done
shift $((OPTIND - 1))

if [[ $# -lt 1 ]]; then
    usage
fi
INPUT="$1"
```

Long Options

```bash
#!/usr/bin/env bash
set -euo pipefail

VERBOSE=false
OUTPUT=""

while [[ $# -gt 0 ]]; do
    case $1 in
        -v|--verbose)
            VERBOSE=true
            shift
            ;;
        -o|--output)
            OUTPUT="$2"
            shift 2
            ;;
        -h|--help)
            usage
            ;;
        -*)
            echo "Unknown option: $1" >&2
            exit 1
            ;;
        *)
            break
            ;;
    esac
done
```

Error Handling

Trap for Cleanup

```bash
#!/usr/bin/env bash
set -euo pipefail

TMPDIR=""

cleanup() {
    if [[ -n "$TMPDIR" && -d "$TMPDIR" ]]; then
        rm -rf "$TMPDIR"
    fi
}
trap cleanup EXIT

TMPDIR=$(mktemp -d)
# Work with $TMPDIR - it's cleaned up on exit, error, or interrupt
```

Custom Error Handler

```bash
#!/usr/bin/env bash
set -euo pipefail

error() {
    echo "Error: $1" >&2
    exit "${2:-1}"
}

warn() {
    echo "Warning: $1" >&2
}

# Usage
[[ -f "$CONFIG" ]] || error "Config file not found: $CONFIG"
```

Detailed Error Reporting

```bash
#!/usr/bin/env bash
set -euo pipefail

on_error() {
    echo "Error on line $1" >&2
    exit 1
}
trap 'on_error $LINENO' ERR
```

Variables and Defaults

```bash
# Default value
NAME="${1:-default}"

# Error if unset
NAME="${1:?Error: name required}"

# Default only if unset (not empty)
NAME="${NAME-default}"

# Assign default if unset
: "${NAME:=default}"
```

String Operations

```bash
FILE="/path/to/file.txt"

# Extract parts
echo "${FILE##*/}"   # file.txt (basename)
echo "${FILE%/*}"    # /path/to (dirname)
echo "${FILE%.txt}"  # /path/to/file (remove extension)
echo "${FILE##*.}"   # txt (extension only)

# Replace
echo "${FILE/path/new}"  # /new/to/file.txt
echo "${FILE//t/T}"      # /paTh/To/file.TxT (all occurrences)

# Case conversion
echo "${FILE^^}"  # Uppercase
echo "${FILE,,}"  # Lowercase

# Length
echo "${#FILE}"  # String length
```

Conditionals

```bash
# File tests
[[ -f "$FILE" ]]  # File exists
[[ -d "$DIR" ]]   # Directory exists
[[ -r "$FILE" ]]  # Readable
[[ -w "$FILE" ]]  # Writable
[[ -x "$FILE" ]]  # Executable
[[ -s "$FILE" ]]  # Non-empty

# String tests
[[ -z "$VAR" ]]      # Empty
[[ -n "$VAR" ]]      # Non-empty
[[ "$A" == "$B" ]]   # Equal
[[ "$A" != "$B" ]]   # Not equal
[[ "$A" =~ regex ]]  # Regex match

# Numeric tests
[[ $A -eq $B ]]  # Equal
[[ $A -ne $B ]]  # Not equal
[[ $A -lt $B ]]  # Less than
[[ $A -le $B ]]  # Less or equal
[[ $A -gt $B ]]  # Greater than
[[ $A -ge $B ]]  # Greater or equal

# Logical
[[ $A && $B ]]  # And
[[ $A || $B ]]  # Or
[[ ! $A ]]      # Not
```

Loops

```bash
# Over arguments
for arg in "$@"; do
    echo "$arg"
done

# Over array
arr=("one" "two" "three")
for item in "${arr[@]}"; do
    echo "$item"
done

# C-style
for ((i=0; i<10; i++)); do
    echo "$i"
done

# Over files
for file in *.txt; do
    [[ -f "$file" ]] || continue
    echo "$file"
done

# Read lines from file
while IFS= read -r line; do
    echo "$line"
done < "$FILE"

# Read lines from command
while IFS= read -r line; do
    echo "$line"
done < <(some_command)
```

Functions

```bash
# Basic function
greet() {
    local name="$1"
    echo "Hello, $name"
}

# With return value
is_valid() {
    local input="$1"
    [[ "$input" =~ ^[0-9]+$ ]]
}

if is_valid "$value"; then
    echo "Valid"
fi

# Return data via stdout
get_config() {
    cat /etc/myapp/config
}
CONFIG=$(get_config)

# Local variables
process() {
    local tmp
    tmp=$(mktemp)
    # tmp is local to this function
}
```

Arrays

```bash
# Create
arr=("one" "two" "three")
arr[3]="four"

# Access
echo "${arr[0]}"      # First element
echo "${arr[@]}"      # All elements
echo "${#arr[@]}"     # Length
echo "${arr[@]:1:2}"  # Slice (start:length)

# Add
arr+=("five")

# Iterate
for item in "${arr[@]}"; do
    echo "$item"
done

# With index
for i in "${!arr[@]}"; do
    echo "$i: ${arr[$i]}"
done
```

Associative Arrays

```bash
declare -A config
config[host]="localhost"
config[port]="8080"

echo "${config[host]}"
echo "${!config[@]}"  # All keys
echo "${config[@]}"   # All values

for key in "${!config[@]}"; do
    echo "$key: ${config[$key]}"
done
```

Process Substitution

```bash
# Compare two commands
diff <(sort file1) <(sort file2)

# Feed command output as file
while read -r line; do
    echo "$line"
done < <(curl -s "$URL")
```

Subshells

```bash
# Run in subshell (changes don't affect parent)
(
    cd /tmp
    rm -f *.tmp
)
# Still in original directory

# Capture output
result=$(command)

# Capture with error
result=$(command 2>&1)
```

Here Documents

```bash
# Multi-line string
cat << EOF
This is a multi-line string
with $VARIABLE expansion
EOF

# No variable expansion
cat << 'EOF'
This preserves $VARIABLE literally
EOF

# Here string
grep "pattern" <<< "$string"
```

Practical Patterns

Check Dependencies

```bash
check_deps() {
    local deps=("curl" "jq" "git")
    for dep in "${deps[@]}"; do
        if ! command -v "$dep" &> /dev/null; then
            echo "Missing dependency: $dep" >&2
            exit 1
        fi
    done
}
```

Logging

```bash
LOG_FILE="/var/log/myapp.log"

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}

log "Starting process"
```

Confirmation Prompt

```bash
confirm() {
    read -rp "$1 [y/N] " response
    [[ "$response" =~ ^[Yy]$ ]]
}

if confirm "Delete all files?"; then
    rm -rf /tmp/data/*
fi
```

Retry Logic

```bash
retry() {
    local max_attempts="$1"
    local delay="$2"
    shift 2
    local attempt=1

    until "$@"; do
        if ((attempt >= max_attempts)); then
            echo "Failed after $attempt attempts" >&2
            return 1
        fi
        echo "Attempt $attempt failed, retrying in ${delay}s..."
        sleep "$delay"
        ((attempt++))
    done
}

retry 3 5 curl -sf "$URL"
```

Lock File

```bash
LOCKFILE="/var/lock/myapp.lock"

acquire_lock() {
    exec 200>"$LOCKFILE"
    flock -n 200 || {
        echo "Another instance is running" >&2
        exit 1
    }
}

acquire_lock
# Script continues only if lock acquired
```

Complete Example

```bash
#!/usr/bin/env bash
set -euo pipefail

readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly SCRIPT_NAME="$(basename "$0")"

usage() {
    cat << EOF
Usage: $SCRIPT_NAME [options] <input>

Options:
  -o, --output FILE  Output file (default: stdout)
  -v, --verbose      Verbose output
  -h, --help         Show this help

Examples:
  $SCRIPT_NAME data.txt
  $SCRIPT_NAME -o result.txt -v input.txt
EOF
    exit "${1:-0}"
}

log() {
    if [[ "$VERBOSE" == true ]]; then
        echo "[$(date '+%H:%M:%S')] $*" >&2
    fi
}

error() {
    echo "Error: $*" >&2
    exit 1
}

# Defaults
VERBOSE=false
OUTPUT="/dev/stdout"

# Parse arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -o|--output) OUTPUT="$2"; shift 2 ;;
        -v|--verbose) VERBOSE=true; shift ;;
        -h|--help) usage ;;
        -*) error "Unknown option: $1" ;;
        *) break ;;
    esac
done

[[ $# -ge 1 ]] || usage 1
INPUT="$1"

# Validate
[[ -f "$INPUT" ]] || error "File not found: $INPUT"

# Main logic
log "Processing $INPUT"
process_data < "$INPUT" > "$OUTPUT"
log "Done"
```

Bash scripts don’t have to be fragile. Apply these patterns and they’ll work reliably for years. ...

February 28, 2026 Β· 8 min Β· 1589 words Β· Rob Washington

Python Patterns for Command-Line Scripts

Python is the go-to language for automation scripts. Here’s how to write CLI tools that are reliable and user-friendly.

Basic Script Structure

```python
#!/usr/bin/env python3
"""One-line description of what this script does."""

import argparse
import sys


def main():
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument('input', help='Input file path')
    parser.add_argument('-o', '--output', help='Output file path')
    parser.add_argument('-v', '--verbose', action='store_true')
    args = parser.parse_args()

    # Your logic here
    process(args.input, args.output, args.verbose)


if __name__ == '__main__':
    main()
```

Argument Parsing with argparse

Positional Arguments

```python
parser.add_argument('filename')           # Required
parser.add_argument('files', nargs='+')   # One or more
parser.add_argument('files', nargs='*')   # Zero or more
parser.add_argument('config', nargs='?')  # Optional positional
```

Optional Arguments

```python
parser.add_argument('-v', '--verbose', action='store_true')
parser.add_argument('-q', '--quiet', action='store_false', dest='verbose')
parser.add_argument('-n', '--count', type=int, default=10)
parser.add_argument('-f', '--format', choices=['json', 'csv', 'table'])
parser.add_argument('--config', type=argparse.FileType('r'))
```

Subcommands

```python
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest='command', required=True)

# 'init' command
init_parser = subparsers.add_parser('init', help='Initialize project')
init_parser.add_argument('--force', action='store_true')

# 'run' command
run_parser = subparsers.add_parser('run', help='Run the application')
run_parser.add_argument('--port', type=int, default=8080)

args = parser.parse_args()

if args.command == 'init':
    do_init(args.force)
elif args.command == 'run':
    do_run(args.port)
```

Error Handling

```python
import sys

def main():
    try:
        result = process()
        return 0
    except FileNotFoundError as e:
        print(f"Error: File not found: {e.filename}", file=sys.stderr)
        return 1
    except PermissionError:
        print("Error: Permission denied", file=sys.stderr)
        return 1
    except KeyboardInterrupt:
        print("\nInterrupted", file=sys.stderr)
        return 130
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        return 1

if __name__ == '__main__':
    sys.exit(main())
```

Logging

```python
import logging

def setup_logging(verbose=False):
    level = logging.DEBUG if verbose else logging.INFO
    logging.basicConfig(
        level=level,
        format='%(asctime)s - %(levelname)s - %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
    )

def main():
    args = parse_args()
    setup_logging(args.verbose)

    logging.info("Starting process")
    logging.debug("Detailed info here")
    logging.warning("Something might be wrong")
    logging.error("Something went wrong")
```

Log to File and Console

```python
def setup_logging(verbose=False, log_file=None):
    handlers = [logging.StreamHandler()]
    if log_file:
        handlers.append(logging.FileHandler(log_file))

    logging.basicConfig(
        level=logging.DEBUG if verbose else logging.INFO,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        handlers=handlers
    )
```

Progress Indicators

Simple Progress

```python
import sys

def process_items(items):
    total = len(items)
    for i, item in enumerate(items, 1):
        process(item)
        print(f"\rProcessing: {i}/{total}", end='', flush=True)
    print()  # Newline at end
```

With tqdm

```python
from tqdm import tqdm

for item in tqdm(items, desc="Processing"):
    process(item)

# Or wrap any iterable
with tqdm(total=100) as pbar:
    for i in range(100):
        do_work()
        pbar.update(1)
```

Reading Input

From File or Stdin

```python
import sys

def read_input(filepath=None):
    if filepath:
        with open(filepath) as f:
            return f.read()
    elif not sys.stdin.isatty():
        return sys.stdin.read()
    else:
        raise ValueError("No input provided")
```

Line by Line

```python
import fileinput

# Reads from files in args or stdin
for line in fileinput.input():
    process(line.strip())
```

Output Formatting

JSON Output

```python
import json

def output_json(data, pretty=False):
    if pretty:
        print(json.dumps(data, indent=2, default=str))
    else:
        print(json.dumps(data, default=str))
```

Table Output

```python
def print_table(headers, rows):
    # Calculate column widths
    widths = [len(h) for h in headers]
    for row in rows:
        for i, cell in enumerate(row):
            widths[i] = max(widths[i], len(str(cell)))

    # Print header
    header_line = ' | '.join(h.ljust(widths[i]) for i, h in enumerate(headers))
    print(header_line)
    print('-' * len(header_line))

    # Print rows
    for row in rows:
        print(' | '.join(str(cell).ljust(widths[i]) for i, cell in enumerate(row)))
```

With tabulate

```python
from tabulate import tabulate

data = [
    ['Alice', 30, 'Engineer'],
    ['Bob', 25, 'Designer'],
]
print(tabulate(data, headers=['Name', 'Age', 'Role'], tablefmt='grid'))
```

Configuration Files

YAML Config

```python
import yaml
from pathlib import Path

def load_config(config_path=None):
    paths = [
        config_path,
        Path.home() / '.myapp.yaml',
        Path('/etc/myapp/config.yaml'),
    ]
    for path in paths:
        if path and Path(path).exists():
            with open(path) as f:
                return yaml.safe_load(f)
    return {}  # Defaults
```

Environment Variables

```python
import os

def get_config():
    return {
        'api_key': os.environ.get('API_KEY'),
        'debug': os.environ.get('DEBUG', '').lower() in ('true', '1', 'yes'),
        'timeout': int(os.environ.get('TIMEOUT', '30')),
    }
```

Running External Commands

```python
import subprocess

def run_command(cmd, check=True):
    """Run command and return output."""
    result = subprocess.run(
        cmd,
        shell=isinstance(cmd, str),
        capture_output=True,
        text=True,
        check=check
    )
    return result.stdout.strip()

# Usage
output = run_command(['git', 'status', '--short'])
output = run_command('ls -la | head -5')
```

With Timeout

```python
try:
    result = subprocess.run(
        ['slow-command'],
        timeout=30,
        capture_output=True,
        text=True
    )
except subprocess.TimeoutExpired:
    print("Command timed out")
```

Temporary Files

```python
import tempfile
from pathlib import Path

# Temporary file
with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
    f.write('{"data": "value"}')
    temp_path = f.name

# Temporary directory
with tempfile.TemporaryDirectory() as tmpdir:
    work_file = Path(tmpdir) / 'work.txt'
    work_file.write_text('working...')
# Directory deleted when context exits
```

Path Handling

```python
from pathlib import Path

def process_files(directory):
    base = Path(directory)

    # Find files
    for path in base.glob('**/*.py'):
        print(f"Processing: {path}")

        # Path operations
        print(f"  Name: {path.name}")
        print(f"  Stem: {path.stem}")
        print(f"  Suffix: {path.suffix}")
        print(f"  Parent: {path.parent}")

        # Read/write
        content = path.read_text()
        path.with_suffix('.bak').write_text(content)
```

Complete Example

```python
#!/usr/bin/env python3
"""Process log files and output statistics."""

import argparse
import json
import logging
import sys
from collections import Counter
from pathlib import Path


def setup_logging(verbose):
    logging.basicConfig(
        level=logging.DEBUG if verbose else logging.INFO,
        format='%(levelname)s: %(message)s'
    )


def parse_args():
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter
    )
    parser.add_argument(
        'logfiles',
        nargs='+',
        type=Path,
        help='Log files to process'
    )
    parser.add_argument(
        '-o', '--output',
        type=argparse.FileType('w'),
        default=sys.stdout,
        help='Output file (default: stdout)'
    )
    parser.add_argument(
        '-f', '--format',
        choices=['json', 'text'],
        default='text',
        help='Output format'
    )
    parser.add_argument(
        '-v', '--verbose',
        action='store_true',
        help='Enable verbose output'
    )
    return parser.parse_args()


def analyze_logs(logfiles):
    stats = Counter()
    for logfile in logfiles:
        logging.info(f"Processing {logfile}")
        if not logfile.exists():
            logging.warning(f"File not found: {logfile}")
            continue
        for line in logfile.read_text().splitlines():
            if 'ERROR' in line:
                stats['errors'] += 1
            elif 'WARNING' in line:
                stats['warnings'] += 1
            stats['total'] += 1
    return dict(stats)


def output_results(stats, output, fmt):
    if fmt == 'json':
        json.dump(stats, output, indent=2)
        output.write('\n')
    else:
        for key, value in stats.items():
            output.write(f"{key}: {value}\n")


def main():
    args = parse_args()
    setup_logging(args.verbose)

    try:
        stats = analyze_logs(args.logfiles)
        output_results(stats, args.output, args.format)
        return 0
    except Exception as e:
        logging.error(f"Failed: {e}")
        return 1


if __name__ == '__main__':
    sys.exit(main())
```

Usage: ...

February 28, 2026 Β· 6 min Β· 1202 words Β· Rob Washington

systemd Timers: The Modern Alternative to Cron

Cron works. It’s also from 1975. systemd timers offer logging integration, dependency handling, and more flexible scheduling. Here’s how to use them.

Why Timers Over Cron?

- Logging: Output goes to journald automatically
- Dependencies: Wait for network, mounts, or other services
- Flexibility: Calendar events, monotonic timers, randomized delays
- Visibility: `systemctl list-timers` shows everything
- Consistency: Same management as other systemd units

Basic Structure

A timer needs two files:

- A `.timer` unit (the schedule)
- A `.service` unit (the job)

Place them in /etc/systemd/system/ (system-wide) or ~/.config/systemd/user/ (user). ...
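As a sketch of that two-file structure (the unit names, schedule, and script path here are hypothetical, shown together for brevity):

```ini
# /etc/systemd/system/backup.service -- the job
[Unit]
Description=Nightly backup
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/backup.timer -- the schedule
[Unit]
Description=Run backup nightly

[Timer]
OnCalendar=*-*-* 02:00:00
RandomizedDelaySec=15m
Persistent=true

[Install]
WantedBy=timers.target
```

Activate with `systemctl daemon-reload` followed by `systemctl enable --now backup.timer`; output then lands in `journalctl -u backup.service`.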

February 28, 2026 Β· 5 min Β· 944 words Β· Rob Washington

Makefiles for Modern Development Workflows

Makefiles are ancient. They’re also incredibly useful for modern development. Here’s how to use them as your project’s command center.

Why Make in 2026?

Every project has commands you run repeatedly:

- Start development servers
- Run tests
- Build containers
- Deploy to environments
- Format and lint code

You could remember them all. Or document them in a README that gets stale. Or put them in a Makefile where they’re executable documentation.

```makefile
.PHONY: dev test build deploy

dev:
	docker-compose up -d
	npm run dev

test:
	pytest -v

build:
	docker build -t myapp:latest .

deploy:
	kubectl apply -f k8s/
```

Now `make dev` starts everything. New team member? Run `make help`. ...
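The excerpt mentions `make help` without showing it; a common self-documenting convention (this exact recipe is my assumption, not from the article) is to annotate targets with `##` comments and parse them out of the Makefile itself:

```makefile
.PHONY: help
help:  ## Show available targets
	@grep -E '^[a-zA-Z_-]+:.*?## ' $(MAKEFILE_LIST) | \
		awk 'BEGIN {FS = ":.*?## "} {printf "%-12s %s\n", $$1, $$2}'

test:  ## Run the test suite
	pytest -v
```

Any target annotated with `## text` after its prerequisites shows up in the listing automatically, so the help stays in sync with the targets.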

February 28, 2026 Β· 6 min Β· 1075 words Β· Rob Washington

Practical Patterns for Building Autonomous AI Agents

The gap between β€œAI demo” and β€œAI that runs reliably” is enormous. Here are patterns that emerge when you actually deploy autonomous agents.

The Heartbeat Pattern

Agents need periodic check-ins, not just reactive responses. A heartbeat system provides:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class HeartbeatState:
    last_email_check: datetime
    last_calendar_check: datetime
    last_service_health: datetime

async def heartbeat(state: HeartbeatState):
    now = datetime.now()

    # timedelta has no .hours attribute; compare elapsed seconds instead
    if (now - state.last_service_health).total_seconds() >= 2 * 3600:
        await check_services()
        state.last_service_health = now

    if (now - state.last_email_check).total_seconds() >= 4 * 3600:
        await check_inbox()
        state.last_email_check = now
```

The key insight: batch periodic tasks into a single heartbeat rather than creating dozens of scheduled jobs. This reduces API calls and keeps context coherent. ...

February 28, 2026 Β· 4 min Β· 642 words Β· Rob Washington

find: The Swiss Army Knife You're Underusing

Every developer knows `find . -name "*.txt"`. Few know that find can replace half your shell scripts.

Beyond Basic Search

```bash
# Find by name (case-insensitive)
find . -iname "readme*"

# Find by extension
find . -name "*.py"

# Find by exact name
find . -name "Makefile"

# Find excluding directories
find . -name "*.js" -not -path "./node_modules/*"
```

The -not (or !) operator is your friend for excluding noise. ...
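Beyond matching, find can act on what it matches via `-exec`, which is where it starts replacing shell scripts. A sketch, run against a scratch directory so it is safe to execute anywhere:

```shell
# Scratch files purely for demonstration.
tmp=$(mktemp -d)
touch "$tmp/keep.txt" "$tmp/old.log"

# -exec runs a command on each match; {} is replaced by the path.
# Terminating with + batches many paths into one invocation,
# while \; would run the command once per file.
find "$tmp" -name "*.log" -exec rm -- {} +

remaining=$(find "$tmp" -type f | wc -l | tr -d ' ')
rm -rf "$tmp"
```

Combine this with tests like `-mtime +30` or `-size +100M` and a one-liner covers what would otherwise be a loop over `ls` output.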

February 27, 2026 Β· 6 min Β· 1166 words Β· Rob Washington