The twelve-factor app methodology emerged from Heroku’s experience deploying thousands of applications. These principles create applications that work well in modern cloud environments — containerized, horizontally scalable, and continuously deployed.

They’re not arbitrary rules. Each factor solves a real problem.

I. Codebase

One codebase tracked in version control, many deploys.

myapp.git — one repo
Deploys: development, staging, production, feature branches

Different environments come from the same codebase. Configuration, not code, varies between deploys.

II. Dependencies

Explicitly declare and isolate dependencies.

# requirements.txt (Python)
flask==3.0.0
sqlalchemy==2.0.25
redis==5.0.1

# Never rely on system packages existing
# Dockerfile
FROM python:3.11-slim
COPY requirements.txt .
RUN pip install -r requirements.txt

No implicit dependencies. Anyone should be able to clone the repo and build.
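
The "explicitly declare" half can be checked mechanically. A minimal sketch — the `parse_requirements` helper is hypothetical, not part of pip — that turns pinned requirements into a dict your build tooling can verify:

```python
def parse_requirements(text):
    """Extract exact `name==version` pins from requirements.txt content.

    Comments and lines without an `==` pin are skipped.
    """
    pins = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop inline comments
        if "==" in line:
            name, _, version = line.partition("==")
            pins[name.strip()] = version.strip()
    return pins


reqs = """\
flask==3.0.0
sqlalchemy==2.0.25
redis==5.0.1
"""
print(parse_requirements(reqs))
# → {'flask': '3.0.0', 'sqlalchemy': '2.0.25', 'redis': '5.0.1'}
```

Exact pins mean every build installs the same versions — no drift between a laptop and CI.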

III. Config

Store config in the environment.

# ❌ Bad: Config in code
DATABASE_URL = "postgres://prod-server/mydb"
API_KEY = "secret123"

# ✅ Good: Config from environment
import os
DATABASE_URL = os.environ["DATABASE_URL"]
API_KEY = os.environ["API_KEY"]

Config varies between deploys. Code doesn’t. Keep them separate.

Litmus test: Could you open-source the codebase without exposing credentials?
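
Failing fast on missing config makes misconfiguration obvious at startup rather than mid-request. A sketch, assuming a hypothetical `require_env` helper (not a standard-library function):

```python
import os

def require_env(name, default=None):
    """Read config from the environment; fail fast if a required value is absent."""
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

# Required values raise immediately when unset; optional ones get safe defaults.
DEBUG = require_env("DEBUG", "false").lower() == "true"
```

A process that dies at boot with a clear message beats one that limps along with a `None` connection string.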

IV. Backing Services

Treat backing services as attached resources.

# Database, cache, queue — all accessed via URL
DATABASE_URL = os.environ["DATABASE_URL"]  # postgres://...
REDIS_URL = os.environ["REDIS_URL"]        # redis://...
S3_BUCKET = os.environ["S3_BUCKET"]        # my-bucket

# Swapping providers = changing config, not code

Local PostgreSQL and AWS RDS should be interchangeable from the app’s perspective.
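
The "attached resource" idea falls out of the URL itself: everything a client needs is in the string. A sketch using the standard library's `urlsplit` (the `describe_service` wrapper is illustrative):

```python
from urllib.parse import urlsplit

def describe_service(url):
    """Break an attached-resource URL into the parts a client needs."""
    parts = urlsplit(url)
    return {"scheme": parts.scheme, "host": parts.hostname,
            "port": parts.port, "path": parts.path}

# Local Postgres and a managed instance differ only in the URL:
print(describe_service("postgres://localhost:5432/myapp"))
print(describe_service("postgres://prod.rds.example.com:5432/myapp"))
```

Swapping providers changes the value of `DATABASE_URL`, and nothing else.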

V. Build, Release, Run

Strictly separate build and run stages.

Build: code + dependencies → executable artifact
Release: artifact + config → deployable release
Run: execute the release in the execution environment
# CI/CD example
build:
  - npm ci
  - npm run build
  - docker build -t myapp:$SHA .

release:
  - docker tag myapp:$SHA myapp:v1.2.3
  - kubectl set image deployment/myapp myapp=myapp:v1.2.3

run:
  - Container starts with ENV vars injected

Every release is immutable. Rollback = deploy previous release.
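
Immutability can be enforced in code as well as in process. A sketch, assuming a hypothetical `Release` record (the names are illustrative, not a real deployment API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Release:
    """Build artifact + config, frozen at release time (V)."""
    build_id: str
    config: tuple  # sorted (key, value) pairs — hashable and immutable

def make_release(build_id, config):
    return Release(build_id, tuple(sorted(config.items())))

v122 = make_release("sha-abc123", {"DEBUG": "false"})
v123 = make_release("sha-def456", {"DEBUG": "false"})

# Rollback means running the previous release — nothing is edited in place.
current = v122
```

Any attempt to mutate a `Release` raises `FrozenInstanceError`, mirroring the rule that a shipped release is never patched, only replaced.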

VI. Processes

Execute the app as one or more stateless processes.

# ❌ Bad: Storing state in memory
user_sessions = {}  # Lost on restart/scale

# ✅ Good: External state store
session = redis.get(f"session:{session_id}")

Processes can crash, restart, or scale. State goes in backing services, not memory.

Sticky sessions are a violation. Users shouldn’t care which instance serves them.
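
Putting session state behind a small interface makes the backing service swappable — and makes the statelessness testable. A sketch (the `SessionStore` and `DictClient` classes are illustrative, not a real library):

```python
class SessionStore:
    """Session state kept in a backing service, not process memory.

    `client` is anything with get/set — redis.Redis in production,
    an in-memory stub locally.
    """
    def __init__(self, client):
        self.client = client

    def save(self, session_id, data):
        self.client.set(f"session:{session_id}", data)

    def load(self, session_id):
        return self.client.get(f"session:{session_id}")


class DictClient:
    """In-memory stand-in with the same get/set surface."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


store = SessionStore(DictClient())
store.save("abc", "user=42")
print(store.load("abc"))  # → user=42
```

Because no instance holds the session in memory, any replica can serve the next request — which is exactly why sticky sessions become unnecessary.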

VII. Port Binding

Export services via port binding.

# App is self-contained, binds to a port
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8000)))

No external web server is required. The app exposes its service — HTTP or another protocol — by binding to a port and listening for requests.

VIII. Concurrency

Scale out via the process model.

# docker-compose.yml
services:
  web:
    image: myapp
    deploy:
      replicas: 4
  
  worker:
    image: myapp
    command: python worker.py
    deploy:
      replicas: 2

Different process types (web, worker, scheduler) scale independently. Need more web capacity? Add web processes.

IX. Disposability

Maximize robustness with fast startup and graceful shutdown.

import signal
import sys

def graceful_shutdown(signum, frame):
    print("Shutting down gracefully...")
    # Finish in-flight requests
    # Close database connections
    # Flush logs
    server.shutdown()  # `server` is your HTTP server instance
    sys.exit(0)

signal.signal(signal.SIGTERM, graceful_shutdown)

Fast startup enables rapid scaling. Graceful shutdown prevents data loss. Processes should be disposable — start fast, stop clean.

X. Dev/Prod Parity

Keep development, staging, and production as similar as possible.

# docker-compose.yml for local dev
services:
  app:
    build: .
    environment:
      - DATABASE_URL=postgres://postgres:postgres@db/myapp
  
  db:
    image: postgres:15  # Same version as production
  
  redis:
    image: redis:7      # Same version as production

Gaps to minimize:

  • Time gap: Deploy hours after writing, not weeks
  • Personnel gap: Developers who write code also deploy it
  • Tools gap: Same backing services in dev and prod

XI. Logs

Treat logs as event streams.

import logging
import sys

# Write to stdout, not files
logging.basicConfig(
    stream=sys.stdout,
    format='%(asctime)s %(levelname)s %(message)s'
)

logger = logging.getLogger(__name__)
logger.info("Request processed", extra={"user_id": user.id})

The app doesn’t manage log files. It writes to stdout. The execution environment captures and routes logs (to files, log aggregators, etc.).
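
Structured, one-event-per-line output makes those streams easy for aggregators to parse. A sketch — the `log_event` helper is hypothetical, standing in for a structured-logging library:

```python
import json
import sys
import time

def log_event(event, **fields):
    """Write one structured event per line to stdout; the platform routes it."""
    record = {"ts": time.time(), "event": event, **fields}
    sys.stdout.write(json.dumps(record) + "\n")

log_event("request_processed", user_id=42, path="/orders")
```

The app never opens a log file; rotating, shipping, and indexing are the environment's job.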

XII. Admin Processes

Run admin/management tasks as one-off processes.

# Run in same environment as the app
kubectl exec -it deployment/myapp -- python manage.py migrate
docker exec myapp python manage.py create_admin

Admin tasks (migrations, console, one-time scripts) run against a release, using the same codebase and config.
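
The key property is that admin tasks read the same config the web process does. A sketch of a hypothetical task registry (illustrative — not Django's manage.py):

```python
import os

TASKS = {}

def task(fn):
    """Register a one-off admin task."""
    TASKS[fn.__name__] = fn
    return fn

@task
def migrate():
    # Reads the same DATABASE_URL the web process uses (III. Config)
    return f"migrating {os.environ['DATABASE_URL']}"

def run_task(name):
    """Run a registered task as a one-off process."""
    return TASKS[name]()
```

Because the task runs against a release — same code, same environment variables — there is no separate "admin config" to drift out of sync.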

Practical Application

# Dockerfile following twelve-factor
FROM python:3.11-slim

# II. Dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

WORKDIR /app
COPY . .

# VII. Port Binding
EXPOSE 8000

# IX. Disposability - fast startup
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "app:app"]
# app.py
import os
import logging
import sys

# XI. Logs to stdout
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logger = logging.getLogger(__name__)

# III. Config from environment
DATABASE_URL = os.environ["DATABASE_URL"]
REDIS_URL = os.environ.get("REDIS_URL")
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"

# IV. Backing services as resources
from sqlalchemy import create_engine
engine = create_engine(DATABASE_URL)

# VI. Stateless processes
# Session state in Redis, not memory
# kubernetes deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3  # VIII. Concurrency
  template:
    spec:
      containers:
      - name: myapp
        image: myapp:v1.2.3  # V. Immutable release
        ports:
        - containerPort: 8000  # VII. Port binding
        env:
        - name: DATABASE_URL  # III. Config
          valueFrom:
            secretKeyRef:
              name: myapp-secrets
              key: database-url
        readinessProbe:  # IX. Disposability
          httpGet:
            path: /health
            port: 8000

The twelve factors aren’t about any specific technology. They’re about building applications that fit modern deployment patterns: containerized, orchestrated, continuously delivered, horizontally scaled.

You don’t have to follow all twelve from day one. But understanding why each exists helps you make better architectural decisions. When something feels painful — manual config management, difficult scaling, environment inconsistencies — there’s probably a factor that addresses it.