Network failures happen. Clients time out. Users double-click buttons. In distributed systems, handling duplicate requests isn’t optional—it’s essential.
## What Is Idempotency?
An operation is idempotent if performing it multiple times produces the same result as performing it once. For APIs, this means:
- `GET /users/123` → Always returns user 123's data (naturally idempotent)
- `DELETE /users/123` → First call deletes, subsequent calls return 404 (idempotent)
- `POST /orders` → Creates a new order each time (NOT idempotent)
The problem? POST and PATCH requests are inherently non-idempotent, but they’re also the most critical to get right.
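To see the difference concretely, here is a toy in-memory sketch (the store and handler names are illustrative, not part of any real framework):

```javascript
// Toy in-memory store illustrating why DELETE is idempotent and POST is not
const users = new Map([[123, { id: 123, name: 'Ada' }]]);

// DELETE /users/:id - the store ends in the same state after one call or many
function deleteUser(id) {
  return users.delete(id) ? { status: 204 } : { status: 404 };
}

// POST /orders - every call creates a new order, so retries create duplicates
let nextOrderId = 1;
const orders = [];
function createOrder(body) {
  const order = { id: nextOrderId++, ...body };
  orders.push(order);
  return { status: 201, body: order };
}

console.log(deleteUser(123).status); // 204 - deleted
console.log(deleteUser(123).status); // 404 - but the state is unchanged
createOrder({ item: 'book' });
createOrder({ item: 'book' });       // a retry of the "same" order
console.log(orders.length);          // 2 - duplicate created
```

The status code differs on the repeated DELETE, but the system state does not; the repeated POST changes state every time, which is exactly what idempotency keys prevent.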
## The Idempotency Key Pattern
The standard solution is client-generated idempotency keys:
```http
POST /api/v1/payments
Idempotency-Key: payment_abc123_1708012800
Content-Type: application/json

{
  "amount": 5000,
  "currency": "USD",
  "recipient": "acc_xyz"
}
```
Server implementation in Node.js:
```javascript
const crypto = require('crypto');
const Redis = require('ioredis');

const redis = new Redis();

// Fingerprint the body so a key can't be reused with different parameters
function hash(body) {
  return crypto.createHash('sha256').update(JSON.stringify(body)).digest('hex');
}

async function handlePayment(req, res) {
  const idempotencyKey = req.headers['idempotency-key'];
  if (!idempotencyKey) {
    return res.status(400).json({ error: 'Idempotency-Key header required' });
  }

  const redisKey = `idempotency:${idempotencyKey}`;

  // Check if we've seen this request before
  const cached = await redis.get(redisKey);
  if (cached) {
    const { status, body, fingerprint } = JSON.parse(cached);

    // Verify the request body matches (prevent key reuse for different requests)
    if (fingerprint !== hash(req.body)) {
      return res.status(422).json({
        error: 'Idempotency key already used with different request body'
      });
    }
    return res.status(status).json(body);
  }

  // Process the payment
  const result = await processPayment(req.body);

  // Cache the response (24-hour TTL)
  await redis.setex(redisKey, 86400, JSON.stringify({
    status: 201,
    body: result,
    fingerprint: hash(req.body)
  }));

  return res.status(201).json(result);
}
```
## Handling In-Flight Requests
What if the same request arrives while the first is still processing? You’ll create duplicates. Use a lock:
```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function handleWithLock(req, res) {
  const idempotencyKey = req.headers['idempotency-key'];
  const lockKey = `lock:${idempotencyKey}`;
  const resultKey = `result:${idempotencyKey}`;

  // Try to acquire the lock (NX = only if not exists, EX = 30-second expiry)
  const acquired = await redis.set(lockKey, '1', 'EX', 30, 'NX');

  if (!acquired) {
    // Another request is processing - poll for its result
    let attempts = 0;
    while (attempts < 10) {
      await sleep(500);
      const result = await redis.get(resultKey);
      if (result) {
        const { status, body } = JSON.parse(result);
        return res.status(status).json(body);
      }
      attempts++;
    }
    return res.status(409).json({ error: 'Request in progress, try again' });
  }

  try {
    // Check for a cached result first (a previous holder may have finished)
    const cached = await redis.get(resultKey);
    if (cached) {
      const { status, body } = JSON.parse(cached);
      return res.status(status).json(body);
    }

    // Process the request
    const result = await processPayment(req.body);

    // Cache the result
    await redis.setex(resultKey, 86400, JSON.stringify({
      status: 201,
      body: result
    }));

    return res.status(201).json(result);
  } finally {
    // Always release the lock
    await redis.del(lockKey);
  }
}
```
## Database-Level Idempotency
When Redis isn’t available, use database constraints:
```sql
-- PostgreSQL with a unique constraint
CREATE TABLE payment_requests (
  idempotency_key VARCHAR(255) PRIMARY KEY,
  payment_id UUID NOT NULL,
  request_body JSONB NOT NULL,
  response_body JSONB,
  status VARCHAR(50) DEFAULT 'processing',
  created_at TIMESTAMP DEFAULT NOW(),
  completed_at TIMESTAMP
);

-- Attempt to insert (returns zero rows, rather than erroring, if the key exists)
INSERT INTO payment_requests (idempotency_key, payment_id, request_body)
VALUES ($1, $2, $3)
ON CONFLICT (idempotency_key) DO NOTHING
RETURNING *;
```
```javascript
const { v4: uuidv4 } = require('uuid');

async function handleWithDatabase(req, res) {
  const idempotencyKey = req.headers['idempotency-key'];
  const paymentId = uuidv4();

  const result = await db.query(`
    INSERT INTO payment_requests (idempotency_key, payment_id, request_body)
    VALUES ($1, $2, $3)
    ON CONFLICT (idempotency_key) DO NOTHING
    RETURNING *
  `, [idempotencyKey, paymentId, req.body]);

  if (result.rowCount === 0) {
    // Key exists - fetch the existing record
    const existing = await db.query(
      'SELECT * FROM payment_requests WHERE idempotency_key = $1',
      [idempotencyKey]
    );

    if (existing.rows[0].status === 'processing') {
      return res.status(409).json({ error: 'Request in progress' });
    }
    return res.status(200).json(existing.rows[0].response_body);
  }

  // Process the payment...
  const paymentResult = await processPayment(req.body);

  // Update with the result
  await db.query(`
    UPDATE payment_requests
    SET response_body = $1, status = 'completed', completed_at = NOW()
    WHERE idempotency_key = $2
  `, [paymentResult, idempotencyKey]);

  return res.status(201).json(paymentResult);
}
```
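One gap worth closing in the database approach: if `processPayment` throws, the row stays in `'processing'` forever and every retry gets a 409. A common fix is to treat sufficiently old `'processing'` rows as abandoned and allow a retry to take over. A minimal sketch of that staleness check (the 30-second cutoff is an assumption; tune it to your processor's worst-case latency):

```javascript
// Decide whether an in-flight row should be treated as abandoned.
// `row` mirrors the payment_requests columns (status, created_at).
function isStale(row, now = Date.now(), cutoffMs = 30_000) {
  return row.status === 'processing' &&
         now - new Date(row.created_at).getTime() > cutoffMs;
}
```

When `isStale` returns true, the handler can reset the row to a fresh attempt (e.g. `UPDATE ... SET status = 'processing', created_at = NOW()` guarded by the old `created_at`) instead of returning 409.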
## Client-Side Best Practices
Clients should generate robust idempotency keys:
```javascript
// Good: includes context that makes it unique - but because of Date.now(),
// a retry must reuse the key built for the first attempt, not build a new one
const idempotencyKey = `order_${userId}_${cartHash}_${Date.now()}`;

// Better: UUID v4 persisted client-side before the first attempt,
// cleared only after the server confirms success
let pendingKey = localStorage.getItem('pending_order_key');
if (!pendingKey) {
  pendingKey = uuidv4();
  localStorage.setItem('pending_order_key', pendingKey);
}

// Best: content-based hash (same content = same key); the minute bucket
// deduplicates identical requests made within the same minute
const contentKey = crypto
  .createHash('sha256')
  .update(JSON.stringify({ userId, items, timestamp: Math.floor(Date.now() / 60000) }))
  .digest('hex');
```
## Key Implementation Details
**TTL:** Keep idempotency records for 24-48 hours. Long enough for retries, short enough to not accumulate forever.

**Request fingerprinting:** Always hash and compare request bodies. The same idempotency key with different parameters should fail.

**Response replay:** Return the exact cached response, including the status code. A 201 on the first request should remain a 201 on replay.

**Error handling:** Don't cache 5xx errors—the client should be able to retry after transient failures.
```javascript
// Only cache successful responses
if (response.status >= 200 && response.status < 300) {
  await cacheResponse(idempotencyKey, response);
}

// For errors, release the lock but don't cache
await redis.del(lockKey);
```
## Testing Idempotency
```javascript
describe('Idempotent API', () => {
  it('returns same response for duplicate requests', async () => {
    const key = 'test-key-' + Date.now();

    const first = await api.post('/payments', body, {
      headers: { 'Idempotency-Key': key }
    });
    const second = await api.post('/payments', body, {
      headers: { 'Idempotency-Key': key }
    });

    expect(first.data.id).toBe(second.data.id);
    expect(first.status).toBe(second.status);
  });

  it('rejects key reuse with different body', async () => {
    const key = 'reuse-test-' + Date.now();

    await api.post('/payments', { amount: 100 }, {
      headers: { 'Idempotency-Key': key }
    });
    const reuse = await api.post('/payments', { amount: 200 }, {
      headers: { 'Idempotency-Key': key }
    });

    expect(reuse.status).toBe(422);
  });
});
```
## Summary
Idempotent APIs are crucial for reliability:
- Require idempotency keys for state-changing operations
- Use distributed locks to prevent race conditions
- Fingerprint requests to detect key reuse abuse
- Cache responses with appropriate TTLs
- Replay exact responses including status codes
The extra complexity upfront saves countless hours debugging duplicate payments, orders, or any other operation that should only happen once.
Building reliable distributed systems? Check out our other posts on retry patterns and graceful shutdown.