You refresh your page and see it: 502 Bad Gateway. Nginx is telling you something went wrong, but what? This guide covers the most common causes and how to fix each one.

What 502 Bad Gateway Actually Means

A 502 error means Nginx (acting as a reverse proxy) tried to contact your upstream server (your app) and either:

  • Couldn’t connect at all
  • Received an invalid response
  • Timed out waiting for a response

Nginx is working fine. Your upstream is the problem.

Quick Diagnosis Checklist

Before diving deep, run through these:

# Is your upstream running?
systemctl status your-app
# or
docker ps | grep your-app

# Can you reach it directly?
curl -v http://localhost:3000/health

# What do the logs say?
tail -50 /var/log/nginx/error.log
journalctl -u your-app --since "5 minutes ago"

Cause 1: Upstream Is Down

The most common cause. Your app crashed, hasn’t started, or failed silently.

Symptoms:

connect() failed (111: Connection refused) while connecting to upstream

Fix:

# Check if the process exists
pgrep -f your-app

# Restart it
systemctl restart your-app
# or
docker restart your-container

# Watch the logs for startup errors
journalctl -u your-app -f

Prevention: Use a process manager (systemd, PM2, supervisord) with automatic restart.
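For systemd, that means setting `Restart=` in the unit file. A minimal sketch, assuming your app is a single binary (the paths and names are placeholders):

```ini
# /etc/systemd/system/your-app.service
[Unit]
Description=Your app
After=network.target

[Service]
ExecStart=/usr/local/bin/your-app
Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
```

Apply it with `systemctl daemon-reload && systemctl enable --now your-app`.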

Cause 2: Wrong Upstream Address

Your Nginx config points to the wrong host or port.

Symptoms:

  • 502 immediately on every request
  • Upstream shows as healthy when you check directly

Check your config:

# /etc/nginx/sites-available/your-site
upstream backend {
    server 127.0.0.1:3000;  # Is this correct?
}

location / {
    proxy_pass http://backend;
}

Common mistakes:

  • Using localhost instead of 127.0.0.1 (localhost may resolve to ::1 while your app only listens on IPv4)
  • Port mismatch (app runs on 8080, config says 3000)
  • Using proxy_pass http:// to an app that speaks a different protocol (e.g. FastCGI, which needs fastcgi_pass)

Fix: Verify the port your app actually binds to:

ss -tlnp | grep your-app
# or
netstat -tlnp | grep LISTEN

Cause 3: Docker Networking Issues

Running Nginx on the host but your app in Docker? They can’t talk via localhost.

Symptoms:

connect() failed (111: Connection refused)

…but the container is running and healthy.

The problem: From the host, localhost means the host. From inside a container, localhost means that container. They’re isolated.

Fix Option 1: Use the Docker bridge IP

# Find the container's IP
docker inspect your-container | grep IPAddress

# Update Nginx config
upstream backend {
    server 172.17.0.2:3000;  # note: bridge IPs can change when the container restarts
}

Fix Option 2: Use host networking (if appropriate)

docker run --network host your-image
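With `--network host` the container shares the host's network stack, so the Nginx config on the host can keep pointing at loopback (assuming the app binds port 3000):

```nginx
upstream backend {
    server 127.0.0.1:3000;  # works because the container shares the host network
}
```

Host networking is Linux-only and removes the container's network isolation, so use it deliberately.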

Fix Option 3: Run Nginx in Docker too

Put both in the same Docker network:

# docker-compose.yml
services:
  nginx:
    image: nginx
    networks:
      - app-network
  
  app:
    image: your-app
    networks:
      - app-network

networks:
  app-network:

Then use the service name as hostname:

upstream backend {
    server app:3000;
}
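One gotcha: Nginx resolves static upstream hostnames at startup, so if the `app` container isn't up yet, Nginx exits with "host not found in upstream". A `depends_on` entry at least orders container startup (it does not wait for the app to be ready), sketched here against the compose file above:

```yaml
services:
  nginx:
    image: nginx
    depends_on:
      - app
    networks:
      - app-network
```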

Cause 4: Upstream Timeout

Your app takes too long to respond. Nginx gives up.

Symptoms:

upstream timed out (110: Connection timed out) while reading response header

Quick fix - increase timeouts:

location / {
    proxy_pass http://backend;
    proxy_connect_timeout 60s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
}

Better fix: Figure out why your app is slow. Check:

  • Database queries (add indexes, optimize queries)
  • External API calls (add timeouts, async processing)
  • Memory issues (app swapping to disk)
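To find out which requests are slow, you can log Nginx's view of upstream latency. A sketch using a custom log_format (the variables are standard Nginx; the log path is a placeholder):

```nginx
# In the http {} block
log_format upstream_time '$remote_addr "$request" $status '
                         'upstream=$upstream_response_time request=$request_time';

access_log /var/log/nginx/timing.log upstream_time;
```

If `$upstream_response_time` accounts for nearly all of `$request_time`, the app itself is slow; a large gap points at the network or the client connection instead.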

Cause 5: Too Many Open Files

Under high load, you hit system limits.

Symptoms:

socket() failed (24: Too many open files)

Fix:

# Check current limits
ulimit -n

# Increase in /etc/nginx/nginx.conf (main context)
worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
}

# Also increase system-wide
echo "* soft nofile 65535" >> /etc/security/limits.conf
echo "* hard nofile 65535" >> /etc/security/limits.conf
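Note that /etc/security/limits.conf only applies to PAM login sessions, not to services started by systemd. If Nginx runs under systemd, set the limit in a drop-in (a sketch; the drop-in path is conventional):

```ini
# /etc/systemd/system/nginx.service.d/limits.conf
[Service]
LimitNOFILE=65535
```

Then run `systemctl daemon-reload && systemctl restart nginx`.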

Cause 6: SELinux Blocking Connections

On RHEL/CentOS, SELinux may block Nginx from connecting to upstream ports.

Symptoms:

  • Everything looks correct
  • Works when you disable SELinux
  • Logs show permission denied

Fix:

# Check if SELinux is the problem
ausearch -m avc -ts recent | grep nginx

# Allow Nginx to connect to any port
setsebool -P httpd_can_network_connect 1

# Or allow specific port
semanage port -a -t http_port_t -p tcp 3000

Debugging Template

When you hit a 502, run through this:

#!/bin/bash
echo "=== Nginx Status ==="
systemctl status nginx

echo "=== Upstream Status ==="
systemctl status your-app

echo "=== Port Bindings ==="
ss -tlnp | grep -E ':(80|443|3000)'

echo "=== Recent Nginx Errors ==="
tail -20 /var/log/nginx/error.log

echo "=== Direct Upstream Test ==="
curl -v http://127.0.0.1:3000/health

The 30-Second Fix

If you need it working NOW:

# Restart everything
systemctl restart your-app
sleep 5
systemctl restart nginx

# Still broken? Check the logs
tail -f /var/log/nginx/error.log

Most 502s are caused by a crashed upstream or misconfigured address. Start there.