Request-response is synchronous. Events are not. That difference changes everything about how you build systems.

In event-driven architecture, components communicate by producing and consuming events rather than calling each other directly. The producer doesn’t know who’s listening. The consumer doesn’t know who produced the event. This decoupling enables scale, resilience, and evolution that tight coupling can’t match.

Why Events?

Temporal decoupling: Producer and consumer don’t need to be online simultaneously. The order service publishes “OrderPlaced”; the shipping service processes it when ready.

Spatial decoupling: Services don’t need to know each other’s locations. They know the event bus.

Scaling independence: Add ten more consumers without touching the producer. Scale each service based on its own bottlenecks.

Failure isolation: If the notification service is down, orders still process. Notifications catch up when it recovers.
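These decoupling properties show up even in a minimal in-memory event bus. This is just a sketch for illustration (the `EventBus` class is hypothetical; a real system would use a broker), but notice that the producer never references its consumers:

```javascript
// Minimal in-memory event bus: producers and consumers only share the bus,
// never a direct reference to each other.
class EventBus {
  constructor() {
    this.handlers = new Map(); // eventType -> [handler, ...]
  }

  subscribe(eventType, handler) {
    const list = this.handlers.get(eventType) || [];
    list.push(handler);
    this.handlers.set(eventType, list);
  }

  publish(event) {
    // The producer doesn't know who (if anyone) is listening.
    (this.handlers.get(event.eventType) || []).forEach(h => h(event));
  }
}

// Two independent consumers react to the same fact.
const bus = new EventBus();
bus.subscribe('OrderPlaced', e => console.log('shipping sees', e.data.orderId));
bus.subscribe('OrderPlaced', e => console.log('billing sees', e.data.orderId));
bus.publish({ eventType: 'OrderPlaced', data: { orderId: 'order-456' } });
```

Adding a tenth consumer is one more `subscribe` call; the publisher is untouched.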

Core Patterns

Event Notification

The simplest pattern: something happened, here’s the fact.

{
  "eventType": "OrderPlaced",
  "eventId": "evt-123",
  "timestamp": "2026-02-16T10:30:00Z",
  "data": {
    "orderId": "order-456",
    "customerId": "cust-789"
  }
}

Consumers react however they want. The order service doesn’t dictate behavior—it announces facts.
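Because a notification carries only identifiers, a consumer that needs details typically calls back to the source service. A sketch (the `fetchOrder` lookup is hypothetical, injected here so the snippet runs standalone):

```javascript
// Hypothetical consumer of a thin OrderPlaced notification.
// The payload holds only IDs, so the consumer fetches the rest.
async function onOrderPlaced(event, fetchOrder) {
  const order = await fetchOrder(event.data.orderId); // callback to order service
  return `preparing shipment for ${order.items.length} item(s)`;
}

// Usage with a stubbed lookup:
const stubFetch = async id => ({ id, items: [{ sku: 'WIDGET-1' }] });
onOrderPlaced(
  { eventType: 'OrderPlaced', data: { orderId: 'order-456' } },
  stubFetch
).then(msg => console.log(msg)); // "preparing shipment for 1 item(s)"
```

That callback is exactly what the next pattern eliminates.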

Event-Carried State Transfer

Include enough data that consumers don’t need to call back:

{
  "eventType": "OrderPlaced",
  "eventId": "evt-123",
  "timestamp": "2026-02-16T10:30:00Z",
  "data": {
    "orderId": "order-456",
    "customerId": "cust-789",
    "items": [
      {"sku": "WIDGET-1", "quantity": 2, "price": 29.99}
    ],
    "shippingAddress": {
      "street": "123 Main St",
      "city": "Austin",
      "state": "TX"
    },
    "total": 59.98
  }
}

Trade-off: larger payloads vs. fewer synchronous calls. For high-volume systems, the bandwidth cost is usually worth the latency savings.
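With the fat payload above, a shipping consumer can do its job entirely from the event. A sketch (the `buildShipment` shape is illustrative, not a real API):

```javascript
// Event-carried state transfer: the consumer has everything it needs
// in the payload -- no synchronous call back to the order service.
function buildShipment(event) {
  const { orderId, items, shippingAddress } = event.data;
  return {
    orderId,
    destination: `${shippingAddress.city}, ${shippingAddress.state}`,
    parcels: items.reduce((n, item) => n + item.quantity, 0),
  };
}
```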

Event Sourcing

Store state as a sequence of events, not a snapshot:

[Account]
AccountOpened   (balance: 0)
MoneyDeposited  (amount: 100)
MoneyDeposited  (amount: 50)
MoneyWithdrawn  (amount: 30)

Current state: balance = 120

Replay events to reconstruct state. You get a complete audit trail, time-travel debugging, and the ability to rebuild read models.

class Account {
  constructor() {
    this.balance = 0;
    this.events = [];
  }

  apply(event) {
    switch (event.type) {
      case 'AccountOpened':
        this.balance = event.initialBalance || 0;
        break;
      case 'MoneyDeposited':
        this.balance += event.amount;
        break;
      case 'MoneyWithdrawn':
        this.balance -= event.amount;
        break;
    }
    this.events.push(event);
  }

  // Reconstruct from event history
  static fromEvents(events) {
    const account = new Account();
    events.forEach(e => account.apply(e));
    return account;
  }
}
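Rebuilding read models is where event sourcing pays off: a view you never planned for can be derived retroactively from the same history. A sketch of one such projection, a running-balance statement (standalone, independent of the `Account` class above):

```javascript
// Replay the event history into a brand-new read model:
// a statement showing the balance after each event.
function statement(events) {
  let balance = 0;
  return events.map(e => {
    if (e.type === 'AccountOpened') balance = e.initialBalance || 0;
    if (e.type === 'MoneyDeposited') balance += e.amount;
    if (e.type === 'MoneyWithdrawn') balance -= e.amount;
    return { type: e.type, balance };
  });
}
```

Had you stored only a snapshot, this view would be impossible to produce for past activity.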

CQRS (Command Query Responsibility Segregation)

Separate write models from read models:

Commands → Write Model → Events → Read Model(s) → Queries

The write side optimizes for consistency and business rules. The read side optimizes for query patterns—denormalized, pre-computed, maybe in a different database entirely.

// Write side: domain logic
class OrderAggregate {
  placeOrder(items, customer) {
    this.validateInventory(items);
    this.validateCustomer(customer);
    return new OrderPlacedEvent(items, customer);
  }
}

// Read side: optimized for queries
class OrderReadModel {
  async handleOrderPlaced(event) {
    await this.db.query(`
      INSERT INTO orders_view 
      (order_id, customer_name, total, status, created_at)
      VALUES ($1, $2, $3, 'placed', $4)
    `, [event.orderId, event.customerName, event.total, event.timestamp]);
  }
}

Message Broker Choices

Apache Kafka: Log-based, persistent, replayable. Best for high-throughput event streams where you need replay and long retention.

RabbitMQ: Traditional message broker. Flexible routing, multiple protocols, good for task queues and RPC patterns.

AWS SQS/SNS: Managed, serverless. SQS for queues, SNS for pub/sub. Simple, scales automatically.

Redis Streams: Lightweight, fast. Good for simpler use cases or when you’re already running Redis.

Pick based on your needs:

  • Need replay? Kafka or event store
  • Need complex routing? RabbitMQ
  • Want managed? Cloud-native options
  • Need simplicity? Redis Streams or SQS

Delivery Guarantees

At-most-once: Fire and forget. Fast, but messages can be lost.

At-least-once: Retry until acknowledged. Messages might be delivered multiple times—consumers must be idempotent.

Exactly-once: The holy grail. Kafka supports it within a transaction, but end-to-end exactly-once requires careful design.

For most systems, at-least-once with idempotent consumers is the sweet spot:

async function handleOrderPlaced(event) {
  // Idempotency: check if already processed
  const existing = await db.query(
    'SELECT 1 FROM processed_events WHERE event_id = $1',
    [event.eventId]
  );
  
  if (existing.rows.length > 0) {
    return; // Already processed, skip
  }

  await db.transaction(async (tx) => {
    // Do the work
    await tx.query('INSERT INTO shipments ...', [...]);
    
    // Mark as processed
    await tx.query(
      'INSERT INTO processed_events (event_id) VALUES ($1)',
      [event.eventId]
    );
  });
}

Ordering and Partitioning

Events for the same entity should be processed in order. Partition by entity ID:

// Kafka producer
await producer.send({
  topic: 'orders',
  messages: [{
    key: order.customerId,  // Partition key
    value: JSON.stringify(event),
  }],
});

All events for customer X go to the same partition → same consumer → ordered processing.

Error Handling

Dead letter queues catch poison messages:

Main Queue → Consumer → Success
                │  failure (after N retries)
                ▼
        Dead Letter Queue → Manual inspection

Don’t let one bad message block the entire queue. Move it aside, alert, and keep processing.
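A consumer-side retry-then-park loop can sketch this out. Assumptions flagged: `dlq` stands in for your broker's dead-letter API, and the retry policy (three attempts, no backoff) is illustrative only:

```javascript
// Retry a handler up to maxRetries; on repeated failure, park the
// message on a dead letter queue instead of blocking the main queue.
async function consume(message, handler, dlq, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      await handler(message);
      return 'processed';
    } catch (err) {
      if (attempt === maxRetries) {
        await dlq.push({ message, error: String(err) }); // park for inspection
        return 'dead-lettered';
      }
      // Otherwise fall through and retry. Real systems add backoff here.
    }
  }
}
```

The key property: a poison message costs N attempts, then gets out of the way.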

Anti-Patterns

Event soup: Too many fine-grained events. If you’re publishing ButtonClicked and MouseMoved, you’ve gone too far.

Synchronous mindset: Expecting immediate consistency. Events are eventually consistent by nature.

Payload bloat: Putting entire database rows in events. Include what consumers need, not everything you have.

Missing correlation: No way to trace a request across events. Always include correlation IDs.

Ignoring ordering: Processing events out of order when order matters. Partition correctly.

The Mental Model

Think of events like a newspaper:

  • The publisher prints the news (produces events)
  • Subscribers read what interests them (consume selectively)
  • The newspaper doesn’t know who reads it
  • Readers don’t coordinate with each other
  • Yesterday’s paper is still available (event log)

Your services become independent readers of a shared stream of facts about what happened in your system. They react according to their own logic, at their own pace, without tight coordination.

That’s the power of events: you stop building a distributed monolith and start building a system where components can evolve independently.