LLM API Integration Patterns for Production Applications

Integrating LLMs into production applications is deceptively simple: call an API, get text back. But building reliable, cost-effective systems requires more thought. Here are patterns that work at scale.

The Basic Call

Every LLM integration starts here:

```python
import openai

def complete(prompt: str) -> str:
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content
```

This works for prototypes. Production needs more.

Retry with Exponential Backoff

LLM APIs have rate limits and occasional failures: ...
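The excerpt cuts off before the retry code. A minimal, dependency-free sketch of exponential backoff with jitter might look like the following; `TransientError` is a stand-in for whatever rate-limit or connection exceptions your SDK actually raises:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for rate-limit or connection errors from an LLM SDK."""

def retry_with_backoff(max_retries=5, base_delay=1.0, max_delay=60.0):
    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except TransientError:
                    if attempt == max_retries - 1:
                        raise
                    # Exponential backoff, capped, with jitter to avoid
                    # thundering-herd retries across many clients
                    delay = min(base_delay * (2 ** attempt), max_delay)
                    time.sleep(delay + random.uniform(0, delay * 0.1))
        return wrapper
    return decorator
```

In real code you would catch the SDK's specific exception types rather than one broad class, and log each retry.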

March 1, 2026 · 5 min · 1002 words · Rob Washington

Python Patterns for Command-Line Scripts

Python is the go-to language for automation scripts. Here’s how to write CLI tools that are reliable and user-friendly.

Basic Script Structure

```python
#!/usr/bin/env python3
"""One-line description of what this script does."""
import argparse
import sys

def main():
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument('input', help='Input file path')
    parser.add_argument('-o', '--output', help='Output file path')
    parser.add_argument('-v', '--verbose', action='store_true')
    args = parser.parse_args()

    # Your logic here
    process(args.input, args.output, args.verbose)

if __name__ == '__main__':
    main()
```

Argument Parsing with argparse

Positional Arguments

```python
parser.add_argument('filename')           # Required
parser.add_argument('files', nargs='+')   # One or more
parser.add_argument('files', nargs='*')   # Zero or more
parser.add_argument('config', nargs='?')  # Optional positional
```

Optional Arguments

```python
parser.add_argument('-v', '--verbose', action='store_true')
parser.add_argument('-q', '--quiet', action='store_false', dest='verbose')
parser.add_argument('-n', '--count', type=int, default=10)
parser.add_argument('-f', '--format', choices=['json', 'csv', 'table'])
parser.add_argument('--config', type=argparse.FileType('r'))
```

Subcommands

```python
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest='command', required=True)

# 'init' command
init_parser = subparsers.add_parser('init', help='Initialize project')
init_parser.add_argument('--force', action='store_true')

# 'run' command
run_parser = subparsers.add_parser('run', help='Run the application')
run_parser.add_argument('--port', type=int, default=8080)

args = parser.parse_args()
if args.command == 'init':
    do_init(args.force)
elif args.command == 'run':
    do_run(args.port)
```

Error Handling

```python
import sys

def main():
    try:
        result = process()
        return 0
    except FileNotFoundError as e:
        print(f"Error: File not found: {e.filename}", file=sys.stderr)
        return 1
    except PermissionError:
        print("Error: Permission denied", file=sys.stderr)
        return 1
    except KeyboardInterrupt:
        print("\nInterrupted", file=sys.stderr)
        return 130
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        return 1

if __name__ == '__main__':
    sys.exit(main())
```

Logging

```python
import logging

def setup_logging(verbose=False):
    level = logging.DEBUG if verbose else logging.INFO
    logging.basicConfig(
        level=level,
        format='%(asctime)s - %(levelname)s - %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
    )

def main():
    args = parse_args()
    setup_logging(args.verbose)
    logging.info("Starting process")
    logging.debug("Detailed info here")
    logging.warning("Something might be wrong")
    logging.error("Something went wrong")
```

Log to File and Console

```python
def setup_logging(verbose=False, log_file=None):
    handlers = [logging.StreamHandler()]
    if log_file:
        handlers.append(logging.FileHandler(log_file))
    logging.basicConfig(
        level=logging.DEBUG if verbose else logging.INFO,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        handlers=handlers
    )
```

Progress Indicators

Simple Progress

```python
import sys

def process_items(items):
    total = len(items)
    for i, item in enumerate(items, 1):
        process(item)
        print(f"\rProcessing: {i}/{total}", end='', flush=True)
    print()  # Newline at end
```

With tqdm

```python
from tqdm import tqdm

for item in tqdm(items, desc="Processing"):
    process(item)

# Or wrap any iterable
with tqdm(total=100) as pbar:
    for i in range(100):
        do_work()
        pbar.update(1)
```

Reading Input

From File or Stdin

```python
import sys

def read_input(filepath=None):
    if filepath:
        with open(filepath) as f:
            return f.read()
    elif not sys.stdin.isatty():
        return sys.stdin.read()
    else:
        raise ValueError("No input provided")
```

Line by Line

```python
import fileinput

# Reads from files in args or stdin
for line in fileinput.input():
    process(line.strip())
```

Output Formatting

JSON Output

```python
import json

def output_json(data, pretty=False):
    if pretty:
        print(json.dumps(data, indent=2, default=str))
    else:
        print(json.dumps(data, default=str))
```

Table Output

```python
def print_table(headers, rows):
    # Calculate column widths
    widths = [len(h) for h in headers]
    for row in rows:
        for i, cell in enumerate(row):
            widths[i] = max(widths[i], len(str(cell)))

    # Print header
    header_line = ' | '.join(h.ljust(widths[i]) for i, h in enumerate(headers))
    print(header_line)
    print('-' * len(header_line))

    # Print rows
    for row in rows:
        print(' | '.join(str(cell).ljust(widths[i]) for i, cell in enumerate(row)))
```

With tabulate

```python
from tabulate import tabulate

data = [
    ['Alice', 30, 'Engineer'],
    ['Bob', 25, 'Designer'],
]
print(tabulate(data, headers=['Name', 'Age', 'Role'], tablefmt='grid'))
```

Configuration Files

YAML Config

```python
import yaml
from pathlib import Path

def load_config(config_path=None):
    paths = [
        config_path,
        Path.home() / '.myapp.yaml',
        Path('/etc/myapp/config.yaml'),
    ]
    for path in paths:
        if path and Path(path).exists():
            with open(path) as f:
                return yaml.safe_load(f)
    return {}  # Defaults
```

Environment Variables

```python
import os

def get_config():
    return {
        'api_key': os.environ.get('API_KEY'),
        'debug': os.environ.get('DEBUG', '').lower() in ('true', '1', 'yes'),
        'timeout': int(os.environ.get('TIMEOUT', '30')),
    }
```

Running External Commands

```python
import subprocess

def run_command(cmd, check=True):
    """Run command and return output."""
    result = subprocess.run(
        cmd,
        shell=isinstance(cmd, str),
        capture_output=True,
        text=True,
        check=check
    )
    return result.stdout.strip()

# Usage
output = run_command(['git', 'status', '--short'])
output = run_command('ls -la | head -5')
```

With Timeout

```python
try:
    result = subprocess.run(
        ['slow-command'],
        timeout=30,
        capture_output=True,
        text=True
    )
except subprocess.TimeoutExpired:
    print("Command timed out")
```

Temporary Files

```python
import tempfile
from pathlib import Path

# Temporary file
with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
    f.write('{"data": "value"}')
    temp_path = f.name

# Temporary directory
with tempfile.TemporaryDirectory() as tmpdir:
    work_file = Path(tmpdir) / 'work.txt'
    work_file.write_text('working...')
    # Directory deleted when context exits
```

Path Handling

```python
from pathlib import Path

def process_files(directory):
    base = Path(directory)

    # Find files
    for path in base.glob('**/*.py'):
        print(f"Processing: {path}")

        # Path operations
        print(f"  Name: {path.name}")
        print(f"  Stem: {path.stem}")
        print(f"  Suffix: {path.suffix}")
        print(f"  Parent: {path.parent}")

        # Read/write
        content = path.read_text()
        path.with_suffix('.bak').write_text(content)
```

Complete Example

```python
#!/usr/bin/env python3
"""Process log files and output statistics."""
import argparse
import json
import logging
import sys
from collections import Counter
from pathlib import Path

def setup_logging(verbose):
    logging.basicConfig(
        level=logging.DEBUG if verbose else logging.INFO,
        format='%(levelname)s: %(message)s'
    )

def parse_args():
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter
    )
    parser.add_argument(
        'logfiles', nargs='+', type=Path,
        help='Log files to process'
    )
    parser.add_argument(
        '-o', '--output', type=argparse.FileType('w'),
        default=sys.stdout, help='Output file (default: stdout)'
    )
    parser.add_argument(
        '-f', '--format', choices=['json', 'text'],
        default='text', help='Output format'
    )
    parser.add_argument(
        '-v', '--verbose', action='store_true',
        help='Enable verbose output'
    )
    return parser.parse_args()

def analyze_logs(logfiles):
    stats = Counter()
    for logfile in logfiles:
        logging.info(f"Processing {logfile}")
        if not logfile.exists():
            logging.warning(f"File not found: {logfile}")
            continue
        for line in logfile.read_text().splitlines():
            if 'ERROR' in line:
                stats['errors'] += 1
            elif 'WARNING' in line:
                stats['warnings'] += 1
            stats['total'] += 1
    return dict(stats)

def output_results(stats, output, fmt):
    if fmt == 'json':
        json.dump(stats, output, indent=2)
        output.write('\n')
    else:
        for key, value in stats.items():
            output.write(f"{key}: {value}\n")

def main():
    args = parse_args()
    setup_logging(args.verbose)
    try:
        stats = analyze_logs(args.logfiles)
        output_results(stats, args.output, args.format)
        return 0
    except Exception as e:
        logging.error(f"Failed: {e}")
        return 1

if __name__ == '__main__':
    sys.exit(main())
```

Usage: ...

February 28, 2026 · 6 min · 1202 words · Rob Washington

Getting Structured Data from LLMs: JSON Mode and Beyond

The biggest challenge with LLMs in production isn’t getting good responses—it’s getting parseable responses. When you need JSON for your pipeline, “Here’s the data you requested:” followed by markdown-wrapped output breaks everything. Here’s how to reliably extract structured data.

The Problem

```python
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Extract the person's name and age from: 'John Smith is 34 years old'"}]
)
print(response.choices[0].message.content)
# "The person's name is John Smith and their age is 34."
# ... not what we needed
```

You wanted {"name": "John Smith", "age": 34}. You got prose. ...
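The article is truncated here, but one defensive approach can be sketched: try progressively looser parses — the raw response first, then a fenced code block, then the first brace-delimited span. This is an illustrative helper, not the article's own code:

```python
import json
import re

def extract_json(text: str):
    """Best-effort extraction of a JSON object from LLM output.

    Handles raw JSON, ```json fenced blocks, and JSON embedded in prose.
    """
    # Happy path: the whole response is valid JSON
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Next: a markdown code fence, optionally tagged "json"
    fence = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    if fence:
        return json.loads(fence.group(1))
    # Last resort: the first {...} span in the prose
    brace = re.search(r"\{.*\}", text, re.DOTALL)
    if brace:
        return json.loads(brace.group(0))
    raise ValueError("No JSON found in response")
```

Parsing failures should still be handled upstream — for instance by re-prompting the model with the error message.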

February 26, 2026 · 6 min · 1074 words · Rob Washington

Python Asyncio Patterns: Concurrency Without the Headaches

Asyncio enables concurrent I/O without threads. These patterns help you use it effectively without falling into common traps.

Basic Structure

```python
import asyncio

async def main():
    print("Hello")
    await asyncio.sleep(1)
    print("World")

# Python 3.7+
asyncio.run(main())
```

HTTP Requests with aiohttp

```python
import aiohttp
import asyncio

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, url) for url in urls]
        return await asyncio.gather(*tasks)

# Usage
urls = [
    "https://api.example.com/users",
    "https://api.example.com/posts",
    "https://api.example.com/comments",
]
results = asyncio.run(fetch_all(urls))
```

Task Management

Running Tasks Concurrently

```python
async def task_a():
    await asyncio.sleep(2)
    return "A done"

async def task_b():
    await asyncio.sleep(1)
    return "B done"

async def main():
    # Run concurrently, wait for all
    results = await asyncio.gather(task_a(), task_b())
    print(results)  # ['A done', 'B done'] - takes ~2s total, not 3s

asyncio.run(main())
```

Handle Exceptions in gather

```python
async def might_fail(n):
    if n == 2:
        raise ValueError("Task 2 failed")
    await asyncio.sleep(n)
    return f"Task {n} done"

async def main():
    # return_exceptions=True prevents one failure from canceling others
    results = await asyncio.gather(
        might_fail(1),
        might_fail(2),
        might_fail(3),
        return_exceptions=True
    )
    for result in results:
        if isinstance(result, Exception):
            print(f"Error: {result}")
        else:
            print(result)

asyncio.run(main())
```

First Completed

```python
async def main():
    tasks = [
        asyncio.create_task(fetch(session, url1)),
        asyncio.create_task(fetch(session, url2)),
    ]
    # Return when first completes
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    # Cancel remaining
    for task in pending:
        task.cancel()
    return done.pop().result()
```

Timeout

```python
async def slow_operation():
    await asyncio.sleep(10)
    return "done"

async def main():
    try:
        result = await asyncio.wait_for(slow_operation(), timeout=5.0)
    except asyncio.TimeoutError:
        print("Operation timed out")

asyncio.run(main())
```

Semaphores (Limiting Concurrency)

```python
async def fetch_with_limit(session, url, semaphore):
    async with semaphore:
        async with session.get(url) as response:
            return await response.text()

async def main():
    semaphore = asyncio.Semaphore(10)  # Max 10 concurrent requests
    async with aiohttp.ClientSession() as session:
        tasks = [
            fetch_with_limit(session, url, semaphore)
            for url in urls
        ]
        results = await asyncio.gather(*tasks)
```

Queues for Producer/Consumer

```python
async def producer(queue, items, n_consumers):
    for item in items:
        await queue.put(item)
        print(f"Produced: {item}")
    # Signal completion: one sentinel per consumer, so every worker exits
    for _ in range(n_consumers):
        await queue.put(None)

async def consumer(queue, name):
    while True:
        item = await queue.get()
        if item is None:
            queue.task_done()
            break
        print(f"{name} processing: {item}")
        await asyncio.sleep(1)  # Simulate work
        queue.task_done()

async def main():
    queue = asyncio.Queue(maxsize=10)
    # Start producer and multiple consumers
    await asyncio.gather(
        producer(queue, range(20), n_consumers=2),
        consumer(queue, "Worker-1"),
        consumer(queue, "Worker-2"),
    )

asyncio.run(main())
```

Error Handling Patterns

Task Exception Handling

```python
async def risky_task():
    await asyncio.sleep(1)
    raise ValueError("Something went wrong")

async def main():
    task = asyncio.create_task(risky_task())
    try:
        await task
    except ValueError as e:
        print(f"Caught: {e}")

asyncio.run(main())
```

Background Task Exceptions

```python
def handle_exception(loop, context):
    msg = context.get("exception", context["message"])
    print(f"Caught exception: {msg}")

async def background_task():
    await asyncio.sleep(1)
    raise RuntimeError("Background failure")

async def main():
    loop = asyncio.get_running_loop()
    loop.set_exception_handler(handle_exception)
    # Fire and forget - exception won't crash main
    asyncio.create_task(background_task())
    await asyncio.sleep(5)

asyncio.run(main())
```

Context Managers

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def managed_resource():
    print("Acquiring resource")
    resource = await create_resource()
    try:
        yield resource
    finally:
        print("Releasing resource")
        await resource.close()

async def main():
    async with managed_resource() as resource:
        await resource.do_something()
```

Running Blocking Code

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def blocking_io():
    # Simulates blocking I/O
    import time
    time.sleep(2)
    return "Done"

async def main():
    loop = asyncio.get_running_loop()

    # Run in thread pool
    result = await loop.run_in_executor(None, blocking_io)
    print(result)

    # With custom executor
    with ThreadPoolExecutor(max_workers=4) as executor:
        result = await loop.run_in_executor(executor, blocking_io)

asyncio.run(main())
```

Periodic Tasks

```python
async def periodic_task(interval, func):
    while True:
        await func()
        await asyncio.sleep(interval)

async def heartbeat():
    print("Heartbeat")

async def main():
    # Start periodic task in background
    task = asyncio.create_task(periodic_task(5, heartbeat))

    # Do other work
    await asyncio.sleep(20)

    # Cancel when done
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        print("Periodic task cancelled")

asyncio.run(main())
```

Graceful Shutdown

```python
import signal

async def shutdown(signal, loop):
    print(f"Received {signal.name}")
    tasks = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
    for task in tasks:
        task.cancel()
    await asyncio.gather(*tasks, return_exceptions=True)
    loop.stop()

async def main():
    loop = asyncio.get_running_loop()
    for sig in (signal.SIGTERM, signal.SIGINT):
        loop.add_signal_handler(
            sig,
            lambda s=sig: asyncio.create_task(shutdown(s, loop))
        )
    # Your long-running tasks here
    await asyncio.sleep(3600)

asyncio.run(main())
```

Common Pitfalls

Don’t Block the Event Loop

```python
# BAD - blocks entire event loop
async def bad():
    time.sleep(5)  # Blocking!
    return "done"

# GOOD - use async sleep or run_in_executor
async def good():
    await asyncio.sleep(5)
    return "done"
```

Don’t Forget to Await

```python
# BAD - coroutine never runs
async def main():
    fetch_data()  # Missing await!

# GOOD
async def main():
    await fetch_data()
```

Create Tasks Properly

```python
# BAD - task may be garbage collected
async def main():
    asyncio.create_task(background_work())
    # Task might not complete

# GOOD - keep reference
async def main():
    task = asyncio.create_task(background_work())
    await task  # or store in a set
```

Don’t Mix Sync and Async

```python
# BAD - calling async from sync incorrectly
def sync_function():
    result = async_function()  # Returns coroutine, not result

# GOOD - use asyncio.run or run_in_executor
def sync_function():
    result = asyncio.run(async_function())
```

Testing Async Code

```python
import asyncio
import unittest

import pytest

@pytest.mark.asyncio
async def test_async_function():
    result = await my_async_function()
    assert result == expected

# Or with unittest
class TestAsync(unittest.IsolatedAsyncioTestCase):
    async def test_something(self):
        result = await my_async_function()
        self.assertEqual(result, expected)
```

Quick Reference

```python
# Run async function
asyncio.run(main())

# Concurrent execution
await asyncio.gather(task1(), task2())

# Create background task
task = asyncio.create_task(coro())

# Timeout
await asyncio.wait_for(coro(), timeout=5.0)

# Limit concurrency
semaphore = asyncio.Semaphore(10)
async with semaphore:
    ...

# Run blocking code
await loop.run_in_executor(None, blocking_func)

# Sleep
await asyncio.sleep(1)
```

Asyncio shines for I/O-bound workloads—HTTP requests, database queries, file operations. It won’t help with CPU-bound work (use multiprocessing for that). ...
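For the CPU-bound case, a minimal sketch of handing work to processes from async code (the workload and sizes here are illustrative):

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def cpu_heavy(n: int) -> int:
    # Pure-Python arithmetic that would otherwise block the event loop
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # Each call runs in its own process, sidestepping the GIL
        return await asyncio.gather(
            *(loop.run_in_executor(pool, cpu_heavy, n) for n in (10_000, 20_000))
        )

if __name__ == "__main__":
    print(asyncio.run(main()))
```

The `__main__` guard matters here: on platforms that spawn worker processes by re-importing the module, omitting it causes infinite recursion.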

February 25, 2026 · 6 min · 1194 words · Rob Washington

LLM API Integration Patterns: Building Reliable AI-Powered Applications

Integrating LLM APIs into production applications requires more than just making API calls. These patterns address the real challenges: rate limits, token costs, latency, and reliability.

Basic Client Setup

```python
import os
from anthropic import Anthropic

client = Anthropic(
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
    timeout=60.0,
    max_retries=3,
)

def chat(message: str, system: str = None) -> str:
    """Simple completion with sensible defaults."""
    messages = [{"role": "user", "content": message}]
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system=system or "You are a helpful assistant.",
        messages=messages,
    )
    return response.content[0].text
```

Retry with Exponential Backoff

Built-in retries help, but custom logic handles edge cases: ...
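The excerpt stops before the custom retry logic. One such edge case can be sketched: honoring a server-supplied retry-after hint instead of a blind backoff. `RateLimitError` below is a stand-in for the SDK's real rate-limit exception:

```python
import time

class RateLimitError(Exception):
    """Stand-in for an SDK rate-limit exception carrying a retry-after hint."""
    def __init__(self, retry_after: float = 1.0):
        super().__init__("rate limited")
        self.retry_after = retry_after

def call_with_rate_limit_handling(func, max_attempts=5):
    """Retry, preferring the server's retry-after hint over a fixed schedule."""
    for attempt in range(max_attempts):
        try:
            return func()
        except RateLimitError as e:
            if attempt == max_attempts - 1:
                raise
            # The server knows its own limits: sleep exactly as long as asked
            time.sleep(e.retry_after)
```

With a real SDK you would read the hint from the exception or the `retry-after` response header.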

February 25, 2026 · 7 min · 1291 words · Rob Washington

Python Virtual Environments: A Practical Guide

Every Python project should have its own virtual environment. It’s not optional — it’s how you avoid dependency hell, reproducibility issues, and the dreaded “but it works on my machine.”

Why Virtual Environments?

Without virtual environments:

- Project A needs requests==2.25
- Project B needs requests==2.31
- Both use system Python
- One project breaks

With virtual environments:

- Each project has isolated dependencies
- Different Python versions per project
- Reproducible across machines
- No sudo required for installing packages

The Built-in Way: venv

Python 3.3+ includes venv: ...
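The excerpt is cut off, but the standard venv workflow is short enough to sketch (the `.venv` directory name and the `requests` package are conventions chosen for illustration):

```shell
# Create an isolated environment in .venv
python3 -m venv .venv

# Activate it (POSIX shells; on Windows use .venv\Scripts\activate)
source .venv/bin/activate

# pip now installs into .venv instead of the system Python
pip install requests

# Record exact versions so others can reproduce the environment
pip freeze > requirements.txt

# Leave the environment when done
deactivate
```
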

February 24, 2026 · 8 min · 1501 words · Rob Washington

LLM API Integration Patterns: Building Reliable AI Features

LLM APIs are deceptively simple: send a prompt, get text back. But building reliable AI features requires handling rate limits, managing costs, structuring outputs, and gracefully degrading when things go wrong. Here are the patterns that work in production.

The Basic Client

Start with a wrapper that handles common concerns:

```python
import os
import time
from typing import Optional

import anthropic
from tenacity import retry, stop_after_attempt, wait_exponential

class LLMClient:
    def __init__(self):
        self.client = anthropic.Anthropic()
        self.default_model = "claude-sonnet-4-20250514"
        self.max_tokens = 4096

    @retry(
        stop=stop_after_attempt(3),
        wait=wait_exponential(multiplier=1, min=4, max=60)
    )
    def complete(
        self,
        prompt: str,
        system: Optional[str] = None,
        model: Optional[str] = None,
        max_tokens: Optional[int] = None
    ) -> str:
        messages = [{"role": "user", "content": prompt}]
        response = self.client.messages.create(
            model=model or self.default_model,
            max_tokens=max_tokens or self.max_tokens,
            system=system or "",
            messages=messages
        )
        return response.content[0].text
```

The tenacity library handles retries with exponential backoff — essential for rate limits and transient errors. ...

February 24, 2026 · 6 min · 1104 words · Rob Washington

API Client Design: Building SDKs That Developers Love

A well-designed API client turns complex HTTP interactions into simple method calls. It handles authentication, retries, errors, and serialization — so users don’t have to. These patterns create clients that developers actually enjoy using.

Basic Structure

```python
import httpx
from typing import Optional
from dataclasses import dataclass

@dataclass
class APIConfig:
    base_url: str
    api_key: str
    timeout: float = 30.0
    max_retries: int = 3

class APIClient:
    def __init__(self, config: APIConfig):
        self.config = config
        self._client = httpx.Client(
            base_url=config.base_url,
            timeout=config.timeout,
            headers={"Authorization": f"Bearer {config.api_key}"}
        )

    def _request(self, method: str, path: str, **kwargs) -> dict:
        response = self._client.request(method, path, **kwargs)
        response.raise_for_status()
        return response.json()

    def close(self):
        self._client.close()

    def __enter__(self):
        return self

    def __exit__(self, *args):
        self.close()
```

Resource-Based Design

Organize by resource, not by HTTP method: ...
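The resource-based section is truncated, but the idea can be sketched: group endpoints into resource objects hung off the client, so callers write `client.users.get(...)` instead of raw paths. The transport callable below is an illustrative stand-in for an HTTP layer like the httpx wrapper in the excerpt; the `/users` paths are hypothetical:

```python
from typing import Any

class UsersResource:
    """Groups all /users endpoints behind client.users.*"""

    def __init__(self, client: "APIClient"):
        self._client = client

    def get(self, user_id: str) -> dict:
        return self._client.request("GET", f"/users/{user_id}")

    def create(self, **data: Any) -> dict:
        return self._client.request("POST", "/users", json=data)

class APIClient:
    def __init__(self, transport):
        # transport: callable(method, path, **kwargs) -> dict.
        # In a real client this would wrap an HTTP library.
        self._transport = transport
        self.users = UsersResource(self)

    def request(self, method: str, path: str, **kwargs) -> dict:
        return self._transport(method, path, **kwargs)
```

Callers then read naturally: `client.users.get("123")`, `client.users.create(name="Ada")`.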

February 24, 2026 · 7 min · 1401 words · Rob Washington

Async Python Patterns: Concurrency Without the Confusion

Async Python lets you handle thousands of concurrent I/O operations with a single thread. No threads, no processes, no GIL headaches. But it requires thinking differently about how code executes. These patterns help you write async code that’s both correct and efficient.

The Basics

```python
import asyncio

import aiohttp

async def fetch_data(url: str) -> dict:
    # This is a coroutine - it can be paused and resumed
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.json()

# Running coroutines
async def main():
    data = await fetch_data("https://api.example.com/data")
    print(data)

asyncio.run(main())
```

await pauses the coroutine until the result is ready, letting other coroutines run. ...
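The payoff of that pause-and-resume model is easy to demonstrate with a timing sketch (stdlib only; the sleeps stand in for network calls):

```python
import asyncio
import time

async def slow(label: str, delay: float) -> str:
    await asyncio.sleep(delay)  # yields control while "waiting on I/O"
    return label

async def main() -> float:
    start = time.perf_counter()
    # Three 0.2s "requests" overlap instead of running back to back
    results = await asyncio.gather(slow("a", 0.2), slow("b", 0.2), slow("c", 0.2))
    elapsed = time.perf_counter() - start
    print(results, f"{elapsed:.2f}s")  # ~0.2s total, not 0.6s
    return elapsed

asyncio.run(main())
```
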

February 23, 2026 · 6 min · 1092 words · Rob Washington

LLM API Integration Patterns: Building Reliable AI-Powered Features

Adding an LLM to your application sounds simple: call the API, get a response, display it. In practice, you’re dealing with rate limits, token costs, latency spikes, and outputs that occasionally make no sense. These patterns help build LLM features that are reliable, cost-effective, and actually useful.

The Basic Call

Every LLM integration starts here:

```python
from openai import OpenAI

client = OpenAI()

def ask_llm(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7
    )
    return response.choices[0].message.content
```

This works for demos. Production needs more. ...
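One production concern the excerpt alludes to — graceful degradation — can be sketched with a small wrapper (not from the truncated article; `ask` is any callable shaped like `ask_llm` above):

```python
import logging

FALLBACK = "Sorry, this feature is temporarily unavailable."

def ask_llm_safe(ask, prompt: str, default: str = FALLBACK) -> str:
    """Wrap an LLM call so failures serve a canned response, not an error page."""
    try:
        return ask(prompt)
    except Exception as e:
        # Log the failure for operators; hide the stack trace from users
        logging.warning("LLM call failed, serving fallback: %s", e)
        return default
```

In practice you would narrow the `except` to the SDK's transient errors and let programming bugs surface normally.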

February 23, 2026 · 7 min · 1302 words · Rob Washington