Serverless doesn’t mean no servers. It means someone else’s servers, managed so well you don’t have to think about them. Here’s when that trade-off makes sense.
## What Serverless Actually Is

Serverless is a cloud execution model where:

- You deploy **functions**, not servers
- You pay **per execution**, not uptime
- Scaling is automatic and (theoretically) infinite
- The provider handles all infrastructure

The big players: AWS Lambda, Google Cloud Functions, Azure Functions, Cloudflare Workers.
## A Simple Lambda Function

```python
# lambda_function.py
import json

def lambda_handler(event, context):
    name = event.get('name', 'World')
    return {
        'statusCode': 200,
        'body': json.dumps({
            'message': f'Hello, {name}!'
        })
    }
```
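Because the handler is plain Python, you can exercise it locally before deploying. A quick sketch (the event here is a direct-invoke payload, not an API Gateway event; the handler is reproduced so the snippet runs standalone):

```python
import json

# Same handler as above, reproduced so this snippet is self-contained.
def lambda_handler(event, context):
    name = event.get('name', 'World')
    return {
        'statusCode': 200,
        'body': json.dumps({'message': f'Hello, {name}!'})
    }

# Simulate an invocation; the context object is unused here, so None is fine.
response = lambda_handler({'name': 'Ada'}, None)
print(response['statusCode'])                   # 200
print(json.loads(response['body'])['message'])  # Hello, Ada!
```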
Deploy with the AWS CLI:

```bash
zip function.zip lambda_function.py

aws lambda create-function \
  --function-name hello-world \
  --runtime python3.11 \
  --handler lambda_function.lambda_handler \
  --role arn:aws:iam::123456789:role/lambda-role \
  --zip-file fileb://function.zip
```
That’s it. No EC2 instances, no load balancers, no auto-scaling groups.
## When Serverless Shines

### 1. Event-Driven Workloads

Perfect for reacting to events:

```python
# Process S3 uploads
def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        # Process the uploaded file
        process_image(bucket, key)
```
Trigger configuration:

```yaml
# serverless.yml (Serverless Framework)
functions:
  processImage:
    handler: handler.lambda_handler
    events:
      - s3:
          bucket: my-uploads
          event: s3:ObjectCreated:*
```
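The event Lambda receives from an S3 trigger is a JSON document with a `Records` list, which you can fake locally to test the handler. A minimal sketch (the event below is a trimmed-down version of the real S3 notification shape, and `process_image` is stubbed out):

```python
# Trimmed-down S3 notification event, enough to drive the handler.
sample_event = {
    'Records': [
        {'s3': {'bucket': {'name': 'my-uploads'},
                'object': {'key': 'photos/cat.jpg'}}}
    ]
}

seen = []

def process_image(bucket, key):
    # Stand-in for real image processing.
    seen.append((bucket, key))

def lambda_handler(event, context=None):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        process_image(bucket, key)

lambda_handler(sample_event)
print(seen)  # [('my-uploads', 'photos/cat.jpg')]
```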
### 2. APIs with Variable Traffic

Traffic that spikes unpredictably:

```python
# API endpoint
import json

def lambda_handler(event, context):
    http_method = event['httpMethod']
    path = event['path']

    if http_method == 'GET' and path == '/users':
        users = get_users()
        return {'statusCode': 200, 'body': json.dumps(users)}

    if http_method == 'POST' and path == '/users':
        body = json.loads(event['body'])
        user = create_user(body)
        return {'statusCode': 201, 'body': json.dumps(user)}

    return {'statusCode': 404, 'body': 'Not found'}
```
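As endpoints multiply, an if-chain like this gets unwieldy. One way to keep it flat is a dispatch table keyed on `(method, path)` — a sketch with stubbed-in handlers (`get_users` and `create_user` here are illustrative stand-ins for real database calls):

```python
import json

# Stub handlers standing in for real database calls.
def get_users():
    return [{'id': 1, 'name': 'Ada'}]

def create_user(body):
    return {'id': 2, **body}

# Route table: (method, path) -> function returning (status, payload).
ROUTES = {
    ('GET', '/users'):  lambda event: (200, get_users()),
    ('POST', '/users'): lambda event: (201, create_user(json.loads(event['body']))),
}

def lambda_handler(event, context=None):
    handler = ROUTES.get((event['httpMethod'], event['path']))
    if handler is None:
        return {'statusCode': 404, 'body': 'Not found'}
    status, payload = handler(event)
    return {'statusCode': status, 'body': json.dumps(payload)}

print(lambda_handler({'httpMethod': 'GET', 'path': '/users'}))
```

Adding a route is now one line in `ROUTES` instead of another branch in the handler.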
### 3. Scheduled Tasks

Cron jobs without servers:

```yaml
functions:
  dailyReport:
    handler: reports.daily
    events:
      - schedule: cron(0 9 * * ? *)  # 9 AM daily
  weeklyCleanup:
    handler: cleanup.weekly
    events:
      - schedule: rate(7 days)
```
### 4. Data Processing Pipelines

Chain functions together:

```python
# Step 1: Validate
def validate(event, context):
    data = event['data']
    if is_valid(data):
        return {'status': 'valid', 'data': data}
    raise ValueError('Invalid data')

# Step 2: Transform
def transform(event, context):
    data = event['data']
    return {'data': transform_data(data)}

# Step 3: Load
def load(event, context):
    data = event['data']
    save_to_database(data)
    return {'status': 'complete'}
```
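Each step takes the previous step's output dict as its event, so the whole pipeline can be dry-run locally by threading the dicts through by hand. A sketch with stub implementations for the helpers (`is_valid`, `transform_data`, and `save_to_database` are illustrative):

```python
# Stub implementations so the three steps can be chained locally.
def is_valid(data):
    return isinstance(data, list) and len(data) > 0

def transform_data(data):
    return [x * 2 for x in data]

db = []
def save_to_database(data):
    db.append(data)

def validate(event, context=None):
    data = event['data']
    if is_valid(data):
        return {'status': 'valid', 'data': data}
    raise ValueError('Invalid data')

def transform(event, context=None):
    return {'data': transform_data(event['data'])}

def load(event, context=None):
    save_to_database(event['data'])
    return {'status': 'complete'}

# Thread each step's output into the next, as Step Functions would.
result = load(transform(validate({'data': [1, 2, 3]})))
print(result, db)  # {'status': 'complete'} [[2, 4, 6]]
```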
Orchestrate with Step Functions:

```json
{
  "StartAt": "Validate",
  "States": {
    "Validate": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:...:validate",
      "Next": "Transform"
    },
    "Transform": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:...:transform",
      "Next": "Load"
    },
    "Load": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:...:load",
      "End": true
    }
  }
}
```
## When Serverless Hurts

### 1. Long-Running Processes

Lambda has a 15-minute timeout. If your job takes longer:

```python
# Bad: will time out
def lambda_handler(event, context):
    process_million_records()  # Takes 30 minutes
```
**Solution**: Break the job into chunks, use Step Functions, or use containers (ECS/Fargate).
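One common chunking pattern: each invocation processes a fixed-size batch, then re-invokes the function with a cursor until the data is exhausted. A minimal local sketch (on AWS, the recursive call would be an asynchronous `lambda.invoke` of the same function; the dataset and names here are illustrative):

```python
DATA = list(range(2500))  # stand-in for a large dataset
BATCH_SIZE = 1000
processed = []

def lambda_handler(event, context=None):
    cursor = event.get('cursor', 0)
    batch = DATA[cursor:cursor + BATCH_SIZE]
    processed.extend(batch)  # "process" the batch
    if len(batch) == BATCH_SIZE:
        # More work remains: hand off to a fresh invocation with a cursor.
        # Locally we just recurse; on AWS this would be an async self-invoke.
        return lambda_handler({'cursor': cursor + BATCH_SIZE})
    return {'status': 'complete', 'total': len(processed)}

result = lambda_handler({})
print(result)  # {'status': 'complete', 'total': 2500}
```

Each invocation stays well under the timeout regardless of total dataset size, because it only ever touches one batch.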
### 2. Cold Starts Matter

The first invocation after an idle period is slow (100ms-2s):

```text
# Cold start: ~500ms
# Warm start: ~10ms
```

**Mitigation**:

- Provisioned concurrency (costs money)
- Keep functions warm with scheduled pings
- Use lighter runtimes (Python/Node vs Java)

### 3. Consistent High Traffic

If you're running 24/7 at high volume, serverless gets expensive:
```text
Lambda: 1M requests × $0.20/M   = $0.20
        + 1M × 200ms × 128MB    = ~$0.40
EC2 t3.micro: ~$7.50/month (handles way more)
```
At scale, reserved EC2 or containers win on cost.
### 4. Complex Local Development

Testing Lambda locally is painful:

```bash
# SAM Local helps, but it's not perfect
sam local invoke HelloFunction -e event.json

# Or use LocalStack
docker run -p 4566:4566 localstack/localstack
```
## Real-World Architecture

### API + Database

```text
API Gateway → Lambda → DynamoDB
                ↓
            CloudWatch
```
```python
import json

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('users')

def lambda_handler(event, context):
    user_id = event['pathParameters']['id']
    response = table.get_item(Key={'id': user_id})
    if 'Item' not in response:
        return {'statusCode': 404}
    return {
        'statusCode': 200,
        'body': json.dumps(response['Item'])
    }
```
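The handler's logic can be tested without AWS by substituting a fake for the boto3 table. A sketch (the `FakeTable` class and its contents are illustrative; it mimics only the slice of the `Table.get_item` response shape the handler relies on):

```python
import json

class FakeTable:
    """Minimal stand-in for a boto3 DynamoDB Table (illustrative only)."""
    def __init__(self, items):
        self.items = items

    def get_item(self, Key):
        # Real get_item omits 'Item' when the key is absent; mimic that.
        item = self.items.get(Key['id'])
        return {'Item': item} if item else {}

table = FakeTable({'42': {'id': '42', 'name': 'Ada'}})

def lambda_handler(event, context=None):
    user_id = event['pathParameters']['id']
    response = table.get_item(Key={'id': user_id})
    if 'Item' not in response:
        return {'statusCode': 404}
    return {'statusCode': 200, 'body': json.dumps(response['Item'])}

print(lambda_handler({'pathParameters': {'id': '42'}}))  # 200 + user body
print(lambda_handler({'pathParameters': {'id': '99'}}))  # 404
```

The same idea scales up: libraries like moto can fake entire AWS services when a hand-rolled stub stops being enough.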
Event-Driven Processing# S 3 U p l o a d → S N L S a m ↓ ( b n d o a t i → f i S c Q a S t i → o n L s a ) m b d a → D y n a m o D B
### Hybrid: Serverless + Containers

```text
API Gateway → Lambda (light endpoints)
            → ALB → ECS (heavy processing)
```
## Deployment with Infrastructure as Code

```hcl
# Terraform
resource "aws_lambda_function" "api" {
  filename      = "function.zip"
  function_name = "my-api"
  role          = aws_iam_role.lambda.arn
  handler       = "main.handler"
  runtime       = "python3.11"

  environment {
    variables = {
      TABLE_NAME = aws_dynamodb_table.main.name
    }
  }
}

resource "aws_api_gateway_rest_api" "api" {
  name = "my-api"
}

resource "aws_lambda_permission" "api" {
  statement_id  = "AllowAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.api.function_name
  principal     = "apigateway.amazonaws.com"
}
```
## The Decision Framework

Choose serverless when:

- Traffic is unpredictable or bursty
- You want zero ops overhead
- Workloads are event-driven
- You're building MVPs fast

Choose containers/VMs when:

- Traffic is consistent and high
- You need long-running processes
- Cold starts are unacceptable
- Cost optimization matters at scale

Hybrid is often the answer:

- Serverless for glue, events, and APIs
- Containers for heavy lifting
- Managed services where possible

## The Bottom Line

Serverless is a tool, not a religion. It's fantastic for the right workloads and expensive or frustrating for the wrong ones.
Start serverless for new projects. It’s the fastest path from idea to production. Optimize later when you understand your actual traffic patterns.
Building serverless? Running into issues? Find me on Twitter.