Your systemd service is “active (running)” but the application isn’t responding. No errors in systemctl status. The journal shows it started. Everything looks fine.
Except it isn’t.
This is one of the most frustrating debugging scenarios in Linux administration. Here’s how to actually figure out what’s wrong.
The Problem: Green Status, Dead Application

```shell
$ systemctl status myapp
● myapp.service - My Application
     Loaded: loaded (/etc/systemd/system/myapp.service; enabled)
     Active: active (running) since Fri 2026-03-20 10:00:00 EDT; 2h ago
   Main PID: 12345 (myapp)
```
Looks healthy. But curl localhost:8080 times out. What’s happening?
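Before digging into the process itself, it's worth confirming whether anything is listening on the port at all (8080 here, matching the example above):

```shell
# Is anything listening on 8080, and which PID owns the socket?
ss -tlnp 'sport = :8080'

# No output? Nothing is bound to the port - the app never got
# that far, or it bound to a different address or port.
```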
Step 1: Check If the Process Is Actually Running

The PID systemd shows might be a zombie or stuck process:

```shell
# Is the process actually there?
ps aux | grep 12345

# What state is it in?
grep State /proc/12345/status
# State: S (sleeping)   = normal
# State: D (disk sleep) = stuck on I/O, bad sign
# State: Z (zombie)     = dead but not reaped, very bad

# What's it doing right now?
strace -p 12345 -f 2>&1 | head -50
```
If strace shows the process stuck on a single syscall (like futex or epoll_wait with no activity), your application is hung.
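You can also see where a blocked process is waiting without attaching strace at all: /proc exposes the state and the kernel wait channel directly. A minimal sketch, using a throwaway `sleep` as the stand-in for the hung PID:

```shell
# Stand-in for the stuck process
sleep 300 &
PID=$!
sleep 0.3   # give it a moment to settle into its sleep state

# Single-character state (S, D, Z, ...) - field 3 of /proc/<pid>/stat
awk '{print $3}' /proc/$PID/stat

# Kernel function the process is blocked in (may print 0 if the
# kernel doesn't expose it; needs permission to read the proc entry)
cat /proc/$PID/wchan; echo

kill $PID
```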
Step 2: Check What the Application Is Logging

systemd captures stdout/stderr, but applications often log elsewhere:

```shell
# Journal logs (what systemd captured)
journalctl -u myapp -n 100 --no-pager

# But also check application-specific logs
ls -la /var/log/myapp/
tail -100 /var/log/myapp/error.log

# Or if it logs to syslog
grep myapp /var/log/syslog | tail -50
```
Common gotcha: Your service runs as a different user who can’t write to the log directory. The app fails silently because it can’t even log the failure.
```shell
# Check who owns the log directory
ls -la /var/log/ | grep myapp

# Check who the service runs as
grep -E "^User=" /etc/systemd/system/myapp.service
```
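One way to sidestep the permission trap entirely (assuming systemd 235 or newer) is to let systemd create the log directory itself, owned by the service's user:

```ini
[Service]
User=myappuser
# systemd creates /var/log/myapp owned by myappuser at start
LogsDirectory=myapp
```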
Step 3: Environment Variables Are Often the Culprit
systemd services run in a minimal environment. That $PATH you rely on? Gone. That $DATABASE_URL you exported? Not inherited.
```shell
# See what environment the service actually has
cat /proc/12345/environ | tr '\0' '\n'

# Compare to your shell
env | sort > /tmp/shell_env
cat /proc/12345/environ | tr '\0' '\n' | sort > /tmp/service_env
diff /tmp/shell_env /tmp/service_env
```
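You can reproduce the minimal-environment problem right in your shell: `env -i` is a rough approximation (an approximation only, not systemd's exact environment) of what the service sees:

```shell
# Variables exported in your shell don't survive env -i,
# just as they don't reach a systemd service
export DATABASE_URL="postgres://localhost/mydb"
env -i /bin/sh -c 'echo "DATABASE_URL=${DATABASE_URL:-<unset>}"'
# prints: DATABASE_URL=<unset>
```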
The fix: Explicitly set environment in the unit file:
```ini
[Service]
Environment="DATABASE_URL=postgres://localhost/mydb"
Environment="PATH=/usr/local/bin:/usr/bin:/bin"
# Or load from a file
EnvironmentFile=/etc/myapp/env
```
Step 4: Working Directory Matters

Your app might be looking for config files relative to the working directory:

```shell
# Where does systemd start your process?
grep -E "^WorkingDirectory=" /etc/systemd/system/myapp.service
# If not set, it defaults to /
# Your app looking for ./config.yaml? It's checking /config.yaml
```
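You can check a live process's actual working directory through /proc; a sketch using the current shell as the stand-in PID:

```shell
# The working directory of any process you can inspect
# (substitute the service's PID for $$)
readlink /proc/$$/cwd
```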
The fix:
```ini
[Service]
WorkingDirectory=/opt/myapp
```
Step 5: Resource Limits

systemd imposes default limits that might be too restrictive:

```shell
# Check current limits for the process
cat /proc/12345/limits
# Common culprits:
# - Max open files (need more for many connections)
# - Max processes (fork-bomb protection can bite you)
```
The fix:
```ini
[Service]
LimitNOFILE=65535
LimitNPROC=4096
```
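util-linux's `prlimit` gives a friendlier view than raw /proc, and can even raise limits on a running process; a sketch against the current shell (substitute the service's PID for `$$`):

```shell
# Show the open-files limit of a live process in tabular form
prlimit --pid $$ --nofile
```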
Step 6: The Nuclear Option - Manual Execution

When all else fails, run the exact command systemd runs, as the same user:

```shell
# Find the exact command
grep -E "^ExecStart=" /etc/systemd/system/myapp.service

# Find the user
grep -E "^User=" /etc/systemd/system/myapp.service

# Run it manually
sudo -u myappuser /opt/myapp/bin/myapp --config /etc/myapp/config.yaml
```
Now you’ll see the actual error output that systemd was swallowing.
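A middle ground between manual execution and a full unit is `systemd-run`, which runs the command as a transient service, so unit properties still apply (the paths and user here are the hypothetical ones from the examples above):

```shell
# Run the command under systemd as a throwaway transient unit,
# with output attached to your terminal
sudo systemd-run --wait --pty \
    --uid=myappuser \
    --property=WorkingDirectory=/opt/myapp \
    /opt/myapp/bin/myapp --config /etc/myapp/config.yaml
```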
The Debugging Unit File

Add this to your service for better debugging:

```ini
[Service]
# Send stdout/stderr to the journal under a stable identifier
StandardOutput=journal
StandardError=journal
SyslogIdentifier=myapp

# Don't restart immediately on failure - let us see what happened
Restart=on-failure
RestartSec=10

# Increase logging verbosity if your app supports it
Environment="LOG_LEVEL=debug"
```
Then reload and restart:

```shell
sudo systemctl daemon-reload
sudo systemctl restart myapp
journalctl -u myapp -f   # Watch live
```
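Rather than editing the unit file in place, these debug settings can also live in a drop-in override (`sudo systemctl edit myapp` opens one for you), which keeps the original unit pristine and is easy to delete later. The file lands at a path like `/etc/systemd/system/myapp.service.d/override.conf`:

```ini
# /etc/systemd/system/myapp.service.d/override.conf
[Service]
Environment="LOG_LEVEL=debug"
Restart=on-failure
RestartSec=10
```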
Quick Diagnostic Script

Save this as debug-service.sh:

```shell
#!/bin/bash
SERVICE=$1
PID=$(systemctl show -p MainPID --value "$SERVICE")

echo "=== Service Status ==="
systemctl status "$SERVICE" --no-pager

echo -e "\n=== Process State ==="
grep -E "^(State|Threads|VmRSS):" /proc/$PID/status 2>/dev/null || echo "Process not found"

echo -e "\n=== Recent Logs ==="
journalctl -u "$SERVICE" -n 20 --no-pager

echo -e "\n=== Open Files ==="
ls /proc/$PID/fd 2>/dev/null | wc -l

echo -e "\n=== Environment (first 10) ==="
tr '\0' '\n' < /proc/$PID/environ 2>/dev/null | head -10 || echo "Can't read"
```
Usage: ./debug-service.sh myapp
The Real Lesson

systemd’s “active (running)” only means the process started and hasn’t exited. It says nothing about whether your application is healthy, responding, or doing what it should.

For production services, add a health check:

```ini
[Service]
ExecStartPost=/usr/bin/curl --fail --retry-connrefused --retry 5 --retry-delay 2 http://localhost:8080/health
```
Now systemd will mark the service as failed if it doesn’t become healthy — and you’ll know something’s wrong before your users do.
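If the application can link against libsystemd (an assumption about your app), Type=notify with a watchdog goes further than a one-shot curl: the service must announce readiness and then keep proving liveness for as long as it runs:

```ini
[Service]
Type=notify
# The app must call sd_notify(0, "READY=1") once it's actually ready,
# then send "WATCHDOG=1" at least every 30s or systemd restarts it
WatchdogSec=30
Restart=on-failure
```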