sed: Edit Files Without Opening Them

You need to change a config value across 50 files. You could open each one, or:

```bash
sed -i 's/old_value/new_value/g' *.conf
```

Done. sed is the stream editor — it transforms text as it flows through. Master it, and you’ll never manually edit repetitive files again.

The Basics

```bash
# Replace first occurrence per line
echo "hello hello" | sed 's/hello/hi/'   # hi hello

# Replace all occurrences (g = global)
echo "hello hello" | sed 's/hello/hi/g'  # hi hi

# Replace in file (print to stdout)
sed 's/foo/bar/g' file.txt

# Replace in place (-i)
sed -i 's/foo/bar/g' file.txt

# Backup before in-place edit
sed -i.bak 's/foo/bar/g' file.txt
```

The -i flag is powerful and dangerous. Always test without it first. ...
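A minimal sketch of that "test without -i first" workflow: `-n` plus the `p` flag on `s///` prints only the lines that would actually change (the file path and values are illustrative):

```bash
# Create a throwaway config to experiment on
printf 'port=8080\nhost=localhost\n' > /tmp/demo.conf

# Preview: print only lines the substitution would touch
sed -n 's/8080/9090/gp' /tmp/demo.conf   # port=9090

# Commit, keeping the original as demo.conf.bak
sed -i.bak 's/8080/9090/g' /tmp/demo.conf
grep port /tmp/demo.conf                 # port=9090
```

The `-i.bak` form works on both GNU and BSD sed, which makes it a safer habit than plain `-i`.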

February 27, 2026 · 5 min · 911 words · Rob Washington

awk: When grep and cut Aren't Enough

You can grep for lines and cut for columns. But what about “show me the third column of lines containing ERROR, but only if the second column is greater than 100”? That’s awk territory.

The Basics

awk processes text line by line, splitting each into fields:

```bash
# Print second column (space-delimited by default)
echo "hello world" | awk '{print $2}'  # world

# Print first and third columns
awk '{print $1, $3}' data.txt

# Print entire line
awk '{print $0}' file.txt
```

$1, $2, etc. are fields. $0 is the whole line. NF is the number of fields. NR is the line number. ...
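The motivating query from the intro, written out (the sample log lines are made up for illustration):

```bash
# Third column of lines containing ERROR, but only where the
# second column is greater than 100
printf '%s\n' \
  'ERROR 250 disk_full' \
  'ERROR 50 retry_ok' \
  'INFO 300 startup' |
  awk '/ERROR/ && $2 > 100 {print $3}'
# disk_full
```

The pattern before the braces is a filter; the action runs only for lines that pass it. No grep, no cut, one process.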

February 27, 2026 · 6 min · 1125 words · Rob Washington

find: The Swiss Army Knife You're Underusing

Every developer knows find . -name "*.txt". Few know that find can replace half your shell scripts.

Beyond Basic Search

```bash
# Find by name (case-insensitive)
find . -iname "readme*"

# Find by extension
find . -name "*.py"

# Find by exact name
find . -name "Makefile"

# Find, excluding directories
find . -name "*.js" -not -path "./node_modules/*"
```

The -not (or !) operator is your friend for excluding noise. ...
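Finding files is half the story; `-exec` acts on them. A small sketch (the demo directory and filenames are illustrative):

```bash
# Set up a scratch directory to act on
mkdir -p /tmp/find_demo && cd /tmp/find_demo
touch a.log b.log keep.txt

# Run a command on every match: '+' batches paths into one invocation,
# '\;' would run the command once per file instead
find . -name '*.log' -exec rm {} +

find . -type f
# ./keep.txt
```

For large trees, `+` is much faster than `\;` because it spawns far fewer processes.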

February 27, 2026 · 6 min · 1166 words · Rob Washington

xargs: Turn Any Output Into Parallel Commands

You have a list of files. You need to process each one. The naive approach:

```bash
for file in $(cat files.txt); do
  process "$file"
done
```

This works until it doesn’t — filenames with spaces break it, and it’s sequential. Enter xargs.

The Basics

xargs reads input and converts it into arguments for a command:

```bash
# Delete files listed in a file
cat files.txt | xargs rm

# Same thing, more efficient
xargs rm < files.txt
```

Without xargs, you’d need a loop. With xargs, one line. ...
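The two fixes the loop above lacks, sketched together: null-delimited input handles any filename, and `-P` runs jobs in parallel (the demo directory is illustrative):

```bash
mkdir -p /tmp/xargs_demo && cd /tmp/xargs_demo
touch "plain.txt" "with space.txt"

# -print0 emits NUL-terminated names; -0 consumes them, so spaces
# and even newlines in filenames survive. -n1 = one file per command,
# -P2 = up to two commands at once.
find . -name '*.txt' -print0 | xargs -0 -n1 -P2 wc -c
```

With `-P`, output order is not guaranteed, so keep parallel jobs independent of each other.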

February 27, 2026 · 5 min · 1033 words · Rob Washington

Linux Signals: Graceful Shutdowns and Process Control

Your application is running in production. You need to restart it for a config change. Do you: A) kill -9 and hope for the best, or B) send a signal it can handle gracefully? If you picked A, you’ve probably lost data. Let’s fix that.

The Essential Signals

| Signal   | Number | Default Action | Use Case                      |
|----------|--------|----------------|-------------------------------|
| SIGTERM  | 15     | Terminate      | Graceful shutdown request     |
| SIGINT   | 2      | Terminate      | Ctrl+C, interactive stop      |
| SIGHUP   | 1      | Terminate      | Config reload (by convention) |
| SIGKILL  | 9      | Terminate      | Force kill (cannot be caught) |
| SIGUSR1/2| 10/12  | Terminate      | Application-defined           |
| SIGCHLD  | 17     | Ignore         | Child process state change    |

SIGTERM is the polite ask. “Please shut down when convenient.” SIGKILL is the eviction notice. No cleanup, no saving state, immediate death. ...
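Option B in practice can be sketched with a shell trap; the script path and messages here are illustrative:

```bash
# Write a tiny service that handles SIGTERM gracefully
cat > /tmp/graceful.sh <<'EOF'
#!/usr/bin/env bash
trap 'echo "SIGTERM received, cleaning up"; exit 0' TERM
echo "ready (PID $$)"
while :; do sleep 0.2; done
EOF

# Run it, send a polite SIGTERM, and watch the cleanup run.
# kill -9 would skip the trap entirely.
bash /tmp/graceful.sh > /tmp/graceful.log &
pid=$!
sleep 0.5
kill -TERM "$pid"
wait "$pid"
cat /tmp/graceful.log
```

The same pattern applies in any language: register a SIGTERM handler, finish in-flight work, then exit 0 so your supervisor knows the shutdown was clean.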

February 27, 2026 · 5 min · 985 words · Rob Washington

Cron Jobs That Don't Wake You Up at Night

Cron is deceptively simple. Five fields, a command, done. Until your job runs twice simultaneously, silently fails for a week, or fills your disk with output nobody reads. Here’s how to write cron jobs that actually work in production.

The Basics Done Right

```bash
# Bad: No logging, no error handling
0 * * * * /opt/scripts/backup.sh

# Better: Redirect output, capture errors
0 * * * * /opt/scripts/backup.sh >> /var/log/backup.log 2>&1

# Best: Timestamped logging with chronic
0 * * * * chronic /opt/scripts/backup.sh
```

chronic (from moreutils) only outputs when the command fails. Perfect for cron — silent success, loud failure. ...
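One failure mode the basics above don't cover is the job running twice simultaneously. `flock` (from util-linux) gives a one-line guard; a sketch, with illustrative lock path and script:

```bash
# In the crontab, this would look like:
#   0 * * * * flock -n /var/lock/backup.lock /opt/scripts/backup.sh

# Demonstration: hold the lock, then try to grab it again.
flock -n /tmp/demo.lock sleep 1 &

sleep 0.3
# -n means fail immediately instead of waiting for the lock
if ! flock -n /tmp/demo.lock true; then
  echo "lock held, skipping this run"
fi
wait
```

Because the lock is released when the process exits, even a crashed job never leaves a stale lock behind, unlike hand-rolled PID files.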

February 27, 2026 · 5 min · 896 words · Rob Washington

Mastering systemd Service Units: From First Service to Production-Ready

If you’re running services on Linux, you’re almost certainly using systemd. But there’s a gap between knowing systemctl start nginx and actually writing your own robust service units. Let’s close that gap.

The Anatomy of a Service Unit

A systemd service unit lives in /etc/systemd/system/ and has three main sections:

```ini
[Unit]
Description=My Application Service
After=network.target
Wants=network-online.target

[Service]
Type=simple
User=appuser
Group=appgroup
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/bin/server
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Let’s break down what matters: ...
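One convention worth knowing alongside the unit anatomy: you rarely need to edit a unit file in place to tweak one setting. A drop-in override changes single directives and survives package upgrades (a sketch; the unit name and value are illustrative):

```ini
# /etc/systemd/system/myapp.service.d/override.conf
# Created interactively with: systemctl edit myapp.service
[Service]
# Raise the restart delay without touching the original unit
RestartSec=10
```

If you create the drop-in file by hand rather than via `systemctl edit`, run `systemctl daemon-reload` afterwards so systemd picks it up.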

February 27, 2026 · 4 min · 715 words · Rob Washington

Bash Scripting Patterns That Prevent Disasters

Bash scripts have a reputation for being fragile. They don’t have to be. Here are the patterns that separate scripts that work from scripts that work reliably.

Start Every Script Right

```bash
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
```

What each does:

- set -e - Exit on any command failure
- set -u - Error on undefined variables
- set -o pipefail - Pipelines fail if any command fails
- IFS=$'\n\t' - Safer word splitting (no space splitting)

Error Handling

Basic Trap

```bash
#!/usr/bin/env bash
set -euo pipefail

cleanup() {
  echo "Cleaning up..."
  rm -f "$TEMP_FILE"
}
trap cleanup EXIT

TEMP_FILE=$(mktemp)

# Script continues...
# cleanup runs automatically on exit, error, or interrupt
```

Detailed Error Reporting

```bash
#!/usr/bin/env bash
set -euo pipefail

error_handler() {
  local line=$1
  local exit_code=$2
  echo "Error on line $line: exit code $exit_code" >&2
  exit "$exit_code"
}
trap 'error_handler $LINENO $?' ERR

# Now errors report their line number
```

Log and Exit

```bash
die() {
  echo "ERROR: $*" >&2
  exit 1
}

# Usage
[[ -f "$CONFIG_FILE" ]] || die "Config file not found: $CONFIG_FILE"
```

Argument Parsing

Simple Positional

```bash
#!/usr/bin/env bash
set -euo pipefail

usage() {
  echo "Usage: $0 <environment> <version>"
  echo "  environment: staging|production"
  echo "  version: semver (e.g., 1.2.3)"
  exit 1
}

[[ $# -eq 2 ]] || usage

ENVIRONMENT=$1
VERSION=$2

[[ "$ENVIRONMENT" =~ ^(staging|production)$ ]] || die "Invalid environment"
[[ "$VERSION" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]] || die "Invalid version format"
```

Flags with getopts

```bash
#!/usr/bin/env bash
set -euo pipefail

VERBOSE=false
DRY_RUN=false
OUTPUT=""

usage() {
  cat <<EOF
Usage: $0 [options] <file>

Options:
  -v        Verbose output
  -n        Dry run (don't make changes)
  -o FILE   Output file
  -h        Show this help
EOF
  exit 1
}

while getopts "vno:h" opt; do
  case $opt in
    v) VERBOSE=true ;;
    n) DRY_RUN=true ;;
    o) OUTPUT=$OPTARG ;;
    h) usage ;;
    *) usage ;;
  esac
done
shift $((OPTIND - 1))

[[ $# -eq 1 ]] || usage
FILE=$1
```

Long Options (Manual Parsing)

```bash
#!/usr/bin/env bash
set -euo pipefail

VERBOSE=false
CONFIG=""

# Assumes usage() and die() are defined as above
while [[ $# -gt 0 ]]; do
  case $1 in
    -v|--verbose)
      VERBOSE=true
      shift
      ;;
    -c|--config)
      CONFIG=$2
      shift 2
      ;;
    -h|--help)
      usage
      ;;
    --)
      shift
      break
      ;;
    -*)
      die "Unknown option: $1"
      ;;
    *)
      break
      ;;
  esac
done
```

Variable Safety

Default Values

```bash
# Default if unset or empty
NAME=${NAME:-"default"}

# Assign default if unset or empty
NAME=${NAME:="default"}

# Error if unset
: "${REQUIRED_VAR:?'REQUIRED_VAR must be set'}"
```

Safe Variable Expansion

```bash
# Always quote variables
rm "$FILE"   # Good
rm $FILE     # Bad - breaks on spaces

# Check before using
if [[ -n "${VAR:-}" ]]; then
  echo "$VAR"
fi
```

Arrays

```bash
FILES=("file1.txt" "file with spaces.txt" "file3.txt")

# Iterate safely
for file in "${FILES[@]}"; do
  echo "Processing: $file"
done

# Pass to commands
cp "${FILES[@]}" /destination/

# Length
echo "Count: ${#FILES[@]}"

# Append
FILES+=("another.txt")
```

File Operations

Safe Temporary Files

```bash
TEMP_DIR=$(mktemp -d)
TEMP_FILE=$(mktemp)
# Note: a second trap on EXIT replaces the first, so combine
# all cleanup into a single handler
trap 'rm -rf "$TEMP_DIR"; rm -f "$TEMP_FILE"' EXIT
```

Check Before Acting

```bash
# File exists
[[ -f "$FILE" ]] || die "File not found: $FILE"

# Directory exists
[[ -d "$DIR" ]] || die "Directory not found: $DIR"

# File is readable
[[ -r "$FILE" ]] || die "Cannot read: $FILE"

# File is writable
[[ -w "$FILE" ]] || die "Cannot write: $FILE"

# File is executable
[[ -x "$FILE" ]] || die "Cannot execute: $FILE"
```

Safe File Writing

```bash
# Atomic write with temp file
write_config() {
  local content=$1
  local dest=$2
  local temp
  temp=$(mktemp)
  echo "$content" > "$temp"
  mv "$temp" "$dest"  # Atomic on same filesystem
}
```

Command Execution

Check Command Exists

```bash
require_command() {
  command -v "$1" >/dev/null 2>&1 || die "Required command not found: $1"
}

require_command jq
require_command aws
require_command docker
```

Capture Output and Exit Code

```bash
# Capture output
output=$(some_command 2>&1)

# Capture exit code without exiting (despite set -e)
exit_code=0
output=$(some_command 2>&1) || exit_code=$?
if [[ $exit_code -ne 0 ]]; then
  echo "Command failed with code $exit_code"
  echo "Output: $output"
fi
```

Retry Logic

```bash
retry() {
  local max_attempts=$1
  local delay=$2
  shift 2
  local cmd=("$@")
  local attempt=1

  while [[ $attempt -le $max_attempts ]]; do
    if "${cmd[@]}"; then
      return 0
    fi
    echo "Attempt $attempt/$max_attempts failed. Retrying in ${delay}s..."
    sleep "$delay"
    attempt=$((attempt + 1))
  done
  return 1
}

# Usage
retry 3 5 curl -f https://api.example.com/health
```

Logging

```bash
LOG_FILE="/var/log/myscript.log"

log() {
  local level=$1
  shift
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $*" | tee -a "$LOG_FILE"
}

info()  { log INFO "$@"; }
warn()  { log WARN "$@"; }
error() { log ERROR "$@" >&2; }

# Usage
info "Starting deployment"
warn "Deprecated config option used"
error "Failed to connect to database"
```

Confirmation Prompts

```bash
confirm() {
  local prompt=${1:-"Continue?"}
  local response
  read -r -p "$prompt [y/N] " response
  [[ "$response" =~ ^[Yy]$ ]]
}

# Usage
if confirm "Delete all files in $DIR?"; then
  rm -rf "$DIR"/*
fi

# With default yes
confirm_yes() {
  local prompt=${1:-"Continue?"}
  local response
  read -r -p "$prompt [Y/n] " response
  [[ ! "$response" =~ ^[Nn]$ ]]
}
```

Parallel Execution

```bash
# Simple background jobs
for server in server1 server2 server3; do
  deploy_to "$server" &
done
wait  # Wait for all background jobs

# With job limiting (wait -n requires bash 4.3+)
MAX_JOBS=4
job_count=0
for item in "${ITEMS[@]}"; do
  process_item "$item" &
  # Use arithmetic assignment: ((job_count++)) returns status 1
  # when the count is 0, which trips set -e
  job_count=$((job_count + 1))
  if [[ $job_count -ge $MAX_JOBS ]]; then
    wait -n  # Wait for any one job to complete
    job_count=$((job_count - 1))
  fi
done
wait  # Wait for remaining jobs
```

Complete Script Template

```bash
#!/usr/bin/env bash
#
# Description: What this script does
# Usage: ./script.sh [options] <args>
#
set -euo pipefail
IFS=$'\n\t'

# Constants
readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly SCRIPT_NAME="$(basename "${BASH_SOURCE[0]}")"

# Defaults
VERBOSE=false
DRY_RUN=false

# Logging
log()   { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*"; }
info()  { log "INFO: $*"; }
warn()  { log "WARN: $*" >&2; }
error() { log "ERROR: $*" >&2; }
die()   { error "$@"; exit 1; }

# Cleanup
cleanup() {
  # Add cleanup tasks here
  :
}
trap cleanup EXIT

# Usage
usage() {
  cat <<EOF
Usage: $SCRIPT_NAME [options] <argument>

Description:
  What this script does in more detail.

Options:
  -v, --verbose   Enable verbose output
  -n, --dry-run   Show what would be done
  -h, --help      Show this help message

Examples:
  $SCRIPT_NAME -v input.txt
  $SCRIPT_NAME --dry-run config.yml
EOF
  exit "${1:-0}"
}

# Parse arguments
parse_args() {
  while [[ $# -gt 0 ]]; do
    case $1 in
      -v|--verbose) VERBOSE=true; shift ;;
      -n|--dry-run) DRY_RUN=true; shift ;;
      -h|--help)    usage 0 ;;
      --)           shift; break ;;
      -*)           die "Unknown option: $1" ;;
      *)            break ;;
    esac
  done

  [[ $# -ge 1 ]] || die "Missing required argument"
  ARGUMENT=$1
}

# Main logic
main() {
  parse_args "$@"

  info "Starting with argument: $ARGUMENT"

  if $VERBOSE; then
    info "Verbose mode enabled"
  fi

  if $DRY_RUN; then
    info "Dry run - no changes will be made"
  fi

  # Your script logic here

  info "Done"
}

main "$@"
```

Bash scripts don’t need to be fragile. set -euo pipefail catches most accidents. Proper argument parsing makes scripts usable. Traps ensure cleanup happens. These patterns transform one-off hacks into reliable automation. Use the template, adapt as needed, and stop being afraid of your own scripts. ...
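One `set -e` pitfall worth seeing in isolation: `((var++))` returns a failing status when the expression evaluates to zero, which kills a strict-mode script. A small demonstration:

```bash
# Post-increment evaluates to the OLD value; when i is 0, ((i++))
# exits with status 1 and set -e terminates the script.
bash -c 'set -e; i=0; ((i++)); echo unreachable' || echo "exited early"
# exited early

# Arithmetic assignment always succeeds.
bash -c 'set -e; i=0; i=$((i+1)); echo "safe: i=$i"'
# safe: i=1
```

This is why counters in strict-mode scripts should use `i=$((i + 1))` rather than the terser increment forms.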

February 26, 2026 · 7 min · 1491 words · Rob Washington

Writing Systemd Service Files That Actually Work

Systemd services look simple until they don’t start, restart unexpectedly, or fail silently. Here’s how to write service files that work reliably in production.

Basic Structure

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=My Application
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/myapp
Restart=always

[Install]
WantedBy=multi-user.target
```

Three sections, three purposes:

- [Unit] - What is this, what does it depend on
- [Service] - How to run it
- [Install] - When to start it

Service Types

Type=simple (Default)

```ini
[Service]
Type=simple
ExecStart=/usr/bin/myapp
```

Systemd considers the service started immediately when ExecStart runs. Use when your process stays in foreground. ...
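Beyond picking the right Type, a few [Service] sandboxing directives go a long way toward "reliable in production". A hedged sketch building on the basic unit above (the binary and user are illustrative):

```ini
[Service]
Type=simple
ExecStart=/usr/bin/myapp
Restart=always
# Drop privileges and sandbox the filesystem
User=appuser
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=full
ProtectHome=true
```

These cost almost nothing for well-behaved daemons, and `systemd-analyze security myapp.service` will score how much attack surface remains.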

February 26, 2026 · 6 min · 1098 words · Rob Washington

Linux Process Management: From ps to Process Trees

Understanding processes is fundamental to Linux troubleshooting. These tools and techniques will help you find what’s running, what’s stuck, and what needs to die.

Viewing Processes

ps - Process Snapshot

```bash
# All processes (BSD style)
ps aux

# All processes (Unix style)
ps -ef

# Process tree
ps auxf

# Specific columns
ps -eo pid,ppid,user,%cpu,%mem,stat,cmd

# Find specific process
ps aux | grep nginx

# By exact name (no grep needed)
ps -C nginx

# By user
ps -u www-data
```

Understanding ps Output

A typical ps aux line shows USER, PID, %CPU, %MEM, VSZ, RSS, STAT, TIME, and COMMAND. The columns that matter:

- PID: Process ID
- %CPU: CPU usage
- %MEM: Memory usage
- VSZ: Virtual memory size
- RSS: Resident set size (actual RAM)
- STAT: Process state
- TIME: CPU time consumed

Process States (STAT)

- R - Running
- S - Sleeping (interruptible)
- D - Uninterruptible sleep (usually I/O)
- Z - Zombie
- T - Stopped
- N - Low priority (nice)
- s - Session leader
- l - Multi-threaded

top - Real-time View

```bash
# Basic
top

# Sort by memory
top -o %MEM

# Specific user
top -u www-data

# Batch mode (for scripts)
top -b -n 1
```

Inside top: ...
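Those STAT letters come straight from the kernel, and you can read them without ps at all. A quick sketch, assuming a Linux /proc filesystem and a command name without spaces (the parenthesized comm field in /proc/PID/stat can otherwise shift field numbers):

```bash
# Field 3 of /proc/<pid>/stat is the state letter,
# after the pid and the (comm) fields
state=$(awk '{print $3}' "/proc/$$/stat")
echo "PID $$ is in state: $state"   # typically R (running) or S (sleeping)
```

This is exactly where ps gets the data, which makes /proc handy on minimal containers that ship no process tools.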

February 25, 2026 · 9 min · 1733 words · Rob Washington