The twelve-factor methodology emerged from Heroku’s experience running millions of apps. These principles create applications that deploy cleanly, scale effortlessly, and minimize divergence between development and production. Let’s walk through each factor with practical examples.
## 1. Codebase: One Repo, Many Deploys
One codebase tracked in version control, many deploys (dev, staging, prod).
```text
# Good: single repo, branch-based environments
main       → production
staging    → staging
feature/*  → development

# Bad: separate repos for each environment
myapp-dev/
myapp-staging/
myapp-prod/
```
```python
# config.py - same code, different configs
import os

ENVIRONMENT = os.getenv("ENVIRONMENT", "development")
DATABASE_URL = os.getenv("DATABASE_URL")
```
## 2. Dependencies: Explicitly Declare and Isolate
Never rely on system-wide packages. Declare everything.
```text
# requirements.txt - pin exact versions
flask==3.0.0
sqlalchemy==2.0.25
redis==5.0.1
```

Or declare the same pins with Poetry in `pyproject.toml`:

```toml
[tool.poetry.dependencies]
python = "^3.11"
flask = "3.0.0"
sqlalchemy = "2.0.25"
```
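Pins only help if the running environment actually matches the pin file. A small startup check can catch drift early; this is an illustrative sketch (the `check_pins` helper is not a standard tool), using only the standard library's `importlib.metadata`:

```python
from importlib import metadata

def check_pins(requirements_text):
    """Compare installed package versions against `name==version` pins.

    Returns a list of human-readable mismatches; an empty list means
    the environment matches the pin file.
    """
    mismatches = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue  # skip comments, blanks, and unpinned specs
        name, pinned = (part.strip() for part in line.split("==", 1))
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            mismatches.append(f"{name}: not installed (want {pinned})")
            continue
        if installed != pinned:
            mismatches.append(f"{name}: have {installed}, want {pinned}")
    return mismatches
```

Run it against the contents of `requirements.txt` at startup and refuse to boot on mismatches, and "works on my machine" drift surfaces as a clear error instead of a subtle production difference.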
```dockerfile
# Dockerfile - isolated environment
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["gunicorn", "app:app"]
```
## 3. Config: Store in the Environment
Configuration varies between deploys. Code doesn’t.
```python
# Bad: hardcoded config
DATABASE_URL = "postgres://localhost/myapp"
API_KEY = "sk-secret123"

# Good: environment variables
import os

DATABASE_URL = os.environ["DATABASE_URL"]
API_KEY = os.environ["API_KEY"]
REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6379")
```
```yaml
# Kubernetes ConfigMap + Secret
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  CACHE_TTL: "3600"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
stringData:
  DATABASE_URL: "postgres://user:pass@db/app"
```
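A habit that falls out of this factor: validate configuration once at startup rather than discovering a missing variable mid-request. A minimal sketch (the `load_config` helper is illustrative, not from a library):

```python
import os

def load_config(required, optional=None):
    """Read settings from the environment once at startup.

    Fails fast with one clear error listing every missing required
    variable, instead of crashing on the first lookup at request time.
    """
    missing = [name for name in required if name not in os.environ]
    if missing:
        raise RuntimeError("Missing required env vars: " + ", ".join(missing))
    config = {name: os.environ[name] for name in required}
    for name, default in (optional or {}).items():
        config[name] = os.getenv(name, default)
    return config
```

At process start you might call `load_config(["DATABASE_URL", "API_KEY"], {"REDIS_URL": "redis://localhost:6379"})` and pass the resulting dict around, so any later failure is a real bug rather than a deployment mistake.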
## 4. Backing Services: Treat as Attached Resources
Databases, caches, queues — all are attached resources swappable via config.
```python
# The app doesn't care whether Redis is local or AWS ElastiCache
import os
import redis

redis_client = redis.from_url(os.environ["REDIS_URL"])

# Swap the backing service by changing only the URL:
#   Local:    redis://localhost:6379
#   AWS:      redis://my-cluster.cache.amazonaws.com:6379
#   Provider: redis://user:pass@redis-provider.com:6379
```
```python
# Database abstraction - works with any SQL database SQLAlchemy supports
import os

from sqlalchemy import create_engine

engine = create_engine(os.environ["DATABASE_URL"])
# SQLite for dev:    sqlite:///app.db
# Postgres for prod: postgresql://user:pass@host/db
```
## 5. Build, Release, Run: Strict Separation
Build → immutable artifact. Release → artifact + config. Run → execute.
```bash
# Build stage (CI): produce an immutable artifact
docker build -t registry.example.com/myapp:abc123 .
docker push registry.example.com/myapp:abc123

# Release stage (CD): combine the image with environment-specific config
helm upgrade myapp ./chart \
  --set image.tag=abc123 \
  --values values-production.yaml

# Run stage: the platform executes the release
kubectl get pods
```
```yaml
# GitOps release manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - name: app
          image: registry.example.com/myapp:abc123  # immutable build artifact
          envFrom:
            - configMapRef:
                name: app-config   # release-specific
            - secretRef:
                name: app-secrets
```
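The separation can be stated as a tiny data model: a build is an immutable artifact, a release is a build plus config, and the release history is append-only, so rollback means re-running an earlier release as a new one. A hedged sketch (the class names are illustrative, not any platform's API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Build:
    image: str  # immutable artifact, e.g. "registry.example.com/myapp:abc123"

@dataclass(frozen=True)
class Release:
    build: Build
    config: dict  # environment-specific settings
    version: int  # monotonically increasing release number

class ReleaseLog:
    """Append-only release history: deploy appends, rollback re-deploys."""

    def __init__(self):
        self._releases = []

    def deploy(self, build, config):
        release = Release(build, config, version=len(self._releases) + 1)
        self._releases.append(release)
        return release

    def rollback(self):
        # Re-deploy the previous release's build + config as a new version
        previous = self._releases[-2]
        return self.deploy(previous.build, previous.config)
```

This mirrors what Heroku's release numbers and Helm's revision history do: the artifact never changes after build, and every deploy, including a rollback, is a new numbered release.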
## 6. Processes: Stateless and Share-Nothing
Processes are stateless. Any persistent data lives in backing services.
```python
# Bad: in-memory session storage
sessions = {}

@app.route("/login")
def login():
    sessions[user_id] = session_data  # lost on restart, invisible to other replicas

# Good: external session store
import os
import redis

redis_client = redis.from_url(os.environ["REDIS_URL"])

@app.route("/login")
def login():
    redis_client.setex(f"session:{user_id}", 3600, session_data)
```
```python
# Bad: local file uploads
@app.route("/upload")
def upload():
    file.save(f"/app/uploads/{filename}")  # gone when the container dies

# Good: object storage
import boto3

s3 = boto3.client("s3")

@app.route("/upload")
def upload():
    s3.upload_fileobj(file, "my-bucket", filename)
```
## 7. Port Binding: Export Services via Port
The app is self-contained and exports HTTP via port binding.
```python
# app.py
import os

from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    port = int(os.getenv("PORT", 8080))
    app.run(host="0.0.0.0", port=port)
```
```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["gunicorn", "--bind", "0.0.0.0:8080", "app:app"]
```
## 8. Concurrency: Scale Out via Process Model
Scale by adding processes, not threads within a single process.
```yaml
# Scale horizontally
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 10  # 10 identical, share-nothing processes
  template:
    spec:
      containers:
        - name: web
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
```
```text
# Different process types scale independently:
#   web       - handles HTTP
#   worker    - processes background jobs
#   scheduler - runs cron-style tasks

# Procfile
web: gunicorn app:app
worker: celery -A tasks worker
scheduler: celery -A tasks beat
```
## 9. Disposability: Fast Startup and Graceful Shutdown
Processes start fast and shut down gracefully.
```python
import signal
import sys

def graceful_shutdown(signum, frame):
    print("Shutting down gracefully...")
    # Finish in-flight requests
    # Close database connections
    # Flush logs
    sys.exit(0)

signal.signal(signal.SIGTERM, graceful_shutdown)
signal.signal(signal.SIGINT, graceful_shutdown)
```yaml
# Kubernetes lifecycle hooks
containers:
  - name: app
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 10"]  # let the load balancer drain connections
terminationGracePeriodSeconds: 30
```
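For a queue worker, graceful shutdown usually means: stop taking new jobs when SIGTERM arrives, finish the job in progress, then exit. A minimal sketch of that drain pattern (the `Worker` class and its doubling `handle` are illustrative):

```python
import signal

class Worker:
    """Drains on SIGTERM: finishes the current job, takes no new ones."""

    def __init__(self):
        self.shutting_down = False
        # Only set a flag in the handler; let the loop exit at a safe point
        signal.signal(signal.SIGTERM, self._request_shutdown)

    def _request_shutdown(self, signum, frame):
        self.shutting_down = True

    def run(self, jobs):
        processed = []
        for job in jobs:
            if self.shutting_down:
                break  # stop pulling new work; the platform restarts us elsewhere
            processed.append(self.handle(job))
        return processed

    def handle(self, job):
        return job * 2  # stand-in for real job processing
```

Because the handler only flips a flag, the loop always stops between jobs, never mid-job, which is what makes the `terminationGracePeriodSeconds` budget above meaningful.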
## 10. Dev/Prod Parity: Keep Environments Similar
Minimize gaps between development and production.
```yaml
# docker-compose.yml - dev mirrors prod
services:
  app:
    build: .
    environment:
      - DATABASE_URL=postgres://postgres:postgres@db/app
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
  db:
    image: postgres:15  # same version as prod
  redis:
    image: redis:7      # same version as prod
```
```python
# Same code path for all environments - only config differs
import os

if os.getenv("ENVIRONMENT") == "development":
    app.debug = True  # debug conveniences are fine...
# ...but never different business logic per environment
```
## 11. Logs: Treat as Event Streams
Write logs to stdout. Let the platform handle routing.
```python
import logging
import sys

# Write logs to stdout as one JSON-shaped line per event
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format='{"time": "%(asctime)s", "level": "%(levelname)s", "message": "%(message)s"}',
)

logger = logging.getLogger(__name__)
logger.info("Request processed")
# Note: fields passed via extra={...} are not emitted by this format
# string; they need a formatter that serializes them.
```
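With the stdlib alone, fields passed via `extra=` only reach the output if the formatter knows to serialize them. One way to get genuinely structured lines is a small custom formatter; this is a sketch, not a drop-in replacement for a library like `python-json-logger`:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON line, including extra={...} fields."""

    # Attribute names present on every LogRecord, so we can spot extras
    RESERVED = set(
        logging.LogRecord("", 0, "", 0, "", (), None).__dict__
    ) | {"message", "asctime"}

    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Anything the caller passed via extra={...} lands on __dict__
        for key, value in record.__dict__.items():
            if key not in self.RESERVED:
                payload[key] = value
        return json.dumps(payload)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Request processed", extra={"user_id": 123, "path": "/api"})
```

Each line is then a self-describing JSON event that Fluentd, CloudWatch, or any other log router can parse without custom parsing rules.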
```text
# The platform collects and routes the stream:
#   Kubernetes + Fluentd → Elasticsearch
#   Docker → CloudWatch
#   Heroku → Papertrail
```
## 12. Admin Processes: Run as One-Off Tasks
Migrations, scripts, REPLs — run as one-off processes in identical environments.
```bash
# Run a migration as a one-off process
# (kubectl run takes individual --env=KEY=VALUE flags; to inject a whole
#  ConfigMap, use a Job manifest instead)
kubectl run migration \
  --image=myapp:abc123 \
  --rm -it \
  --restart=Never \
  --env="DATABASE_URL=$DATABASE_URL" \
  -- python manage.py migrate

# One-off console in the same environment
kubectl exec -it deploy/myapp -- python manage.py shell
```
```yaml
# Kubernetes Job for admin tasks
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: myapp:abc123  # same image as the app
          command: ["python", "manage.py", "migrate"]
          envFrom:
            - configMapRef:
                name: app-config  # same config as the app
      restartPolicy: Never
```
## Quick Reference
| Factor | Summary |
|---|---|
| Codebase | One repo, many deploys |
| Dependencies | Declare explicitly, isolate |
| Config | Environment variables |
| Backing Services | Attached via URL |
| Build/Release/Run | Strict separation |
| Processes | Stateless, share-nothing |
| Port Binding | Self-contained HTTP export |
| Concurrency | Scale via processes |
| Disposability | Fast start, graceful stop |
| Dev/Prod Parity | Keep environments similar |
| Logs | Stream to stdout |
| Admin Processes | One-off tasks in same environment |
These factors aren’t rules — they’re patterns that emerged from running apps at scale. Apply them and your apps will deploy anywhere, scale effortlessly, and cause fewer 3 AM pages.