Traditional request-response architectures work well until they don’t. When your services grow, synchronous calls create tight coupling, cascading failures, and bottlenecks. Event-driven architecture (EDA) offers an alternative: systems that react to changes rather than constantly polling for them.

What Is Event-Driven Architecture?

In EDA, components communicate through events — immutable records of something that happened. Instead of Service A calling Service B directly, Service A publishes an event, and any interested services subscribe to it.

Traditional:
  UserService --[HTTP]--> OrderService --[HTTP]--> InventoryService

Event-Driven:
  UserService --[Event]--> Message Broker
                              |-- subscribes --> OrderService
                              |-- subscribes --> InventoryService

The key difference: the publisher doesn’t know or care who’s listening.
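That decoupling can be sketched with a minimal in-memory bus (illustrative only; `EventBus` and the handlers here are hypothetical, not from any library):

```python
from collections import defaultdict

class EventBus:
    """Toy pub/sub bus: publishers know topics, never subscribers."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver to every subscriber; the publisher has no idea who they are
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe('user.created', lambda e: received.append(('email', e['user_id'])))
bus.subscribe('user.created', lambda e: received.append(('analytics', e['user_id'])))
bus.publish('user.created', {'user_id': 'usr_789'})
# received now holds one entry per subscriber
```

Adding a third consumer means one more `subscribe` call; the publishing code never changes.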

Core Patterns

1. Event Notification

The simplest pattern. A service publishes that something happened, including minimal data:

{
  "event_type": "user.created",
  "event_id": "evt_abc123",
  "timestamp": "2026-02-11T04:00:00Z",
  "data": {
    "user_id": "usr_789",
    "email": "user@example.com"
  }
}

Consumers react however they need to. The email service sends a welcome message. The analytics service increments a counter. Neither needs to coordinate with the other.

2. Event-Carried State Transfer

Include enough data in the event that consumers don’t need to call back:

{
  "event_type": "order.shipped",
  "data": {
    "order_id": "ord_456",
    "customer": {
      "id": "usr_789",
      "email": "user@example.com",
      "shipping_address": {
        "street": "123 Main St",
        "city": "Springfield",
        "zip": "12345"
      }
    },
    "tracking_number": "1Z999AA10123456784",
    "carrier": "UPS"
  }
}

This reduces coupling further — the notification service has everything it needs without querying the customer service.
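A consumer of such an event can do its whole job from the payload alone. As a sketch (`handle_order_shipped` is a hypothetical handler, not from the original):

```python
def handle_order_shipped(event):
    """Build a shipping notification purely from the event payload.

    Everything needed (email, address, tracking) travels inside the
    event, so there is no call back to the customer service.
    """
    d = event['data']
    addr = d['customer']['shipping_address']
    return (
        f"To: {d['customer']['email']}\n"
        f"Order {d['order_id']} shipped via {d['carrier']} "
        f"(tracking {d['tracking_number']}) to "
        f"{addr['street']}, {addr['city']} {addr['zip']}"
    )
```

The trade-off: events get larger, and consumers may hold slightly stale copies of customer data.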

3. Event Sourcing

Store state as a sequence of events rather than current values:

# Instead of storing: {"balance": 150}
# Store the events that led there:

events = [
    {"type": "account.opened", "initial_balance": 100, "timestamp": "..."},
    {"type": "deposit.made", "amount": 75, "timestamp": "..."},
    {"type": "withdrawal.made", "amount": 25, "timestamp": "..."},
]

# Current state = replay all events
# balance = 100 + 75 - 25 = 150

This gives you a complete audit trail and the ability to rebuild state at any point in time.
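Replaying the log above is a simple fold over the events (a minimal sketch; timestamps omitted for brevity):

```python
# The event log from above, minus timestamps
events = [
    {"type": "account.opened", "initial_balance": 100},
    {"type": "deposit.made", "amount": 75},
    {"type": "withdrawal.made", "amount": 25},
]

def replay(events):
    """Rebuild current state by applying each event in order."""
    balance = 0
    for e in events:
        if e["type"] == "account.opened":
            balance = e["initial_balance"]
        elif e["type"] == "deposit.made":
            balance += e["amount"]
        elif e["type"] == "withdrawal.made":
            balance -= e["amount"]
    return balance

current = replay(events)           # 150
as_of_second = replay(events[:2])  # 175 -- state after the deposit
```

Replaying a prefix of the log is exactly the "state at any point in time" property: truncate the event list at the moment you care about.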

Implementing With Message Brokers

Redis Streams (Simple, Fast)

Great for getting started. Events are stored in append-only logs:

import redis

r = redis.Redis(decode_responses=True)

# Producer
r.xadd('orders', {
    'event_type': 'order.created',
    'order_id': 'ord_123',
    'total': '99.99'
})

# Consumer (with consumer groups for load balancing)
try:
    # id='0' delivers entries already in the stream, not just new ones
    r.xgroup_create('orders', 'order-processors', id='0', mkstream=True)
except redis.ResponseError:
    pass  # Group already exists

while True:
    events = r.xreadgroup(
        'order-processors',
        'worker-1',
        {'orders': '>'},
        count=10,
        block=5000
    )
    for stream, messages in events:
        for msg_id, data in messages:
            process_order(data)
            r.xack('orders', 'order-processors', msg_id)

Apache Kafka (Scale, Durability)

For high-throughput, persistent event streams:

from kafka import KafkaProducer, KafkaConsumer
import json

# Producer
producer = KafkaProducer(
    bootstrap_servers=['kafka:9092'],
    value_serializer=lambda v: json.dumps(v).encode()
)

producer.send('orders', {
    'event_type': 'order.created',
    'order_id': 'ord_123',
    'items': [{'sku': 'ABC', 'qty': 2}]
})
producer.flush()  # send() is async; flush before exiting

# Consumer
consumer = KafkaConsumer(
    'orders',
    bootstrap_servers=['kafka:9092'],
    group_id='inventory-service',
    value_deserializer=lambda m: json.loads(m.decode())
)

for message in consumer:
    event = message.value
    if event['event_type'] == 'order.created':
        reserve_inventory(event['items'])

AWS EventBridge (Managed, Serverless)

Route events between AWS services and your applications:

import boto3
import json

events = boto3.client('events')

# Publish event
events.put_events(Entries=[{
    'Source': 'myapp.orders',
    'DetailType': 'Order Created',
    'Detail': json.dumps({
        'order_id': 'ord_123',
        'customer_id': 'cust_456'
    }),
    'EventBusName': 'my-app-bus'
}])

Then create rules to route events to Lambda, SQS, or other targets.
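Rules can also be created programmatically. A minimal sketch (the rule name, bus name, and Lambda ARN are placeholders): match our `Order Created` events and send them to a Lambda function.

```python
import json
# In a real setup: import boto3; events_client = boto3.client('events')

# Event pattern matching the Source/DetailType published above
ORDER_CREATED_PATTERN = {
    'source': ['myapp.orders'],
    'detail-type': ['Order Created'],
}

def create_order_rule(events_client):
    events_client.put_rule(
        Name='order-created-to-lambda',
        EventBusName='my-app-bus',
        EventPattern=json.dumps(ORDER_CREATED_PATTERN),
        State='ENABLED',
    )
    events_client.put_targets(
        Rule='order-created-to-lambda',
        EventBusName='my-app-bus',
        Targets=[{
            'Id': 'shipping-fn',
            'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:ship-order',
        }],
    )
```

EventBridge matches each incoming event against every rule on the bus, so adding a new consumer is just another rule plus target.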

Handling Failures

Events don’t solve distributed systems problems — they change them.

Idempotency Is Non-Negotiable

Events may be delivered more than once. Design consumers to handle duplicates:

import redis

r = redis.Redis()

def process_order(event):
    order_id = event['order_id']

    # Check if already processed
    if r.sismember('processed_orders', order_id):
        return  # Skip duplicate

    # Process the order
    create_shipment(event)

    # Mark as processed (with TTL for cleanup)
    r.sadd('processed_orders', order_id)
    r.expire('processed_orders', 86400 * 7)  # 7 days

Dead Letter Queues

When processing fails repeatedly, don’t lose the event:

MAX_RETRIES = 3

def consume_with_retry(event, retry_count=0):
    try:
        process_event(event)
    except Exception as e:
        if retry_count < MAX_RETRIES:
            # Requeue with backoff
            delay = 2 ** retry_count
            schedule_retry(event, delay, retry_count + 1)
        else:
            # Send to dead letter queue for manual review
            send_to_dlq(event, error=str(e))
            alert_ops_team(event, e)

Saga Pattern for Distributed Transactions

When multiple services need to coordinate:

# Order Saga: Orchestrator approach
# (publish/wait_for are assumed messaging helpers: publish emits an event,
#  wait_for blocks until a matching event arrives or raises TimeoutError)
class OrderSaga:
    def execute(self, order):
        try:
            # Step 1: Reserve inventory
            publish('inventory.reserve', order)
            wait_for('inventory.reserved', timeout=30)
            
            # Step 2: Process payment
            publish('payment.process', order)
            wait_for('payment.completed', timeout=30)
            
            # Step 3: Confirm order
            publish('order.confirmed', order)
            
        except TimeoutError:
            # Compensate: undo previous steps
            self.compensate(order)
    
    def compensate(self, order):
        # In practice, only undo the steps that actually completed
        publish('inventory.release', order)
        publish('payment.refund', order)
        publish('order.cancelled', order)

When to Use (and When Not To)

Good fits for EDA:

  • Multiple consumers need the same data
  • You need temporal decoupling (producer doesn’t wait for consumer)
  • Event history/audit trails matter
  • Systems need to scale independently

Stick with request-response when:

  • You need immediate, synchronous responses
  • The interaction is truly point-to-point
  • Debugging simplicity outweighs flexibility
  • You’re just getting started (add complexity later)

Getting Started

  1. Start with one event type — pick something high-value like “order created” or “user signed up”
  2. Use your existing infrastructure — Redis Streams or a simple Postgres-backed queue works fine initially
  3. Define event schemas — use JSON Schema or Avro to prevent drift
  4. Add observability — trace events through your system from the start
  5. Plan for replay — store events durably enough that you can reprocess if needed
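Step 3 is worth making concrete. A minimal sketch of a JSON Schema for the "user.created" event from earlier, plus a tiny hand-rolled required-field check (a real setup would validate with a library such as jsonschema rather than this helper):

```python
# JSON Schema for the user.created event shown earlier in this article
USER_CREATED_SCHEMA = {
    '$schema': 'https://json-schema.org/draft/2020-12/schema',
    'type': 'object',
    'required': ['event_type', 'event_id', 'timestamp', 'data'],
    'properties': {
        'event_type': {'const': 'user.created'},
        'event_id': {'type': 'string'},
        'timestamp': {'type': 'string', 'format': 'date-time'},
        'data': {
            'type': 'object',
            'required': ['user_id', 'email'],
        },
    },
}

def missing_fields(event, schema):
    """Return required top-level fields absent from the event."""
    return [f for f in schema.get('required', []) if f not in event]
```

Rejecting malformed events at the publisher keeps drift out of every downstream consumer at once.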

Event-driven architecture isn’t a silver bullet, but it’s a powerful tool for building systems that grow gracefully. The key is starting simple and adding sophistication as your needs evolve.