How to View and Analyze Nginx Logs

Nginx keeps two logs: access.log records every incoming request, and error.log captures everything that went wrong. Together they tell you exactly what's happening on your server — who's hitting it, what they're requesting, what response they got, how long it took, and why things break.

Knowing how to read these logs is table stakes for anyone running web servers. Site goes down — open error.log. Suspicious traffic spike — dig into access.log. Pages loading slow — grep for requests with high response times. This guide covers the full toolkit: from a quick tail -f to automated analytics with GoAccess and centralized log shipping.

Where Nginx Logs Live

By default, Nginx writes to:

  • /var/log/nginx/access.log — request log
  • /var/log/nginx/error.log — error log

These are the standard paths on Ubuntu, Debian, and CentOS. Custom builds or non-standard configurations may use different locations.

Finds where logs are actually configured across all Nginx config files:

grep -r "access_log\|error_log" /etc/nginx/

If you've set up per-virtualhost logs (the right way to do it), you'll see something like:

/etc/nginx/sites-enabled/example.com:    access_log /var/log/nginx/example.com-access.log;
/etc/nginx/sites-enabled/example.com:    error_log  /var/log/nginx/example.com-error.log;

Lists all Nginx log files with sizes:

ls -lh /var/log/nginx/

Understanding the access.log Format

Before you can analyze logs, you need to know what you're looking at. Each line in access.log represents one HTTP request.

Shows the last 5 lines of access.log:

tail -5 /var/log/nginx/access.log

A typical line looks like this:

192.168.1.1 - john [30/Apr/2026:14:23:01 +0300] "GET /api/users HTTP/1.1" 200 1543 "https://example.com/dashboard" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"

Breaking it down field by field:

Field               Example                          What it means
$remote_addr        192.168.1.1                      Client IP address
$remote_user        john                             Username if HTTP auth is used, - otherwise
$time_local         30/Apr/2026:14:23:01 +0300       Request timestamp
$request            GET /api/users HTTP/1.1          HTTP method, URI, protocol version
$status             200                              HTTP response code
$body_bytes_sent    1543                             Response size in bytes (excluding headers)
$http_referer       https://example.com/dashboard    Where the request came from
$http_user_agent    Mozilla/5.0 ...                  Browser or client identifier

This is the combined format — the default for most Nginx installations. Nginx predefines it internally, so you won't always find it spelled out in nginx.conf.
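For reference, the built-in definition is equivalent to declaring this in the http block:

log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';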

Checks which log format is active:

grep -A5 "log_format" /etc/nginx/nginx.conf

Viewing Logs in Real Time

Watching the live request stream

Shows the last 50 lines and follows new entries as they appear:

tail -f /var/log/nginx/access.log

Tails both access and error logs simultaneously:

tail -f /var/log/nginx/access.log /var/log/nginx/error.log

Shows the last N lines (say, 200):

tail -n 200 /var/log/nginx/access.log

Filtering the live stream

Shows only errors (4xx and 5xx) in real time:

tail -f /var/log/nginx/access.log | grep -E '" [45][0-9]{2} '

Streams only requests from a specific IP:

tail -f /var/log/nginx/access.log | grep "192.168.1.100"

Shows only POST requests as they come in:

tail -f /var/log/nginx/access.log | grep '"POST '

Navigating large files with less

less is the right tool when you need to scroll through a large file without loading it all into memory.

Opens a log file with navigation:

less /var/log/nginx/access.log

Navigation: G jumps to end, g to the beginning, /pattern searches forward, ?pattern searches backward, n for next match, q to quit.

Opens the file in follow mode (like tail -f but inside less):

less +F /var/log/nginx/access.log

Ctrl+C returns to navigation mode from follow mode.

Analyzing access.log with grep and awk

Filtering by response code

Counts requests per response code, sorted by frequency:

awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn

Shows every request that returned a 500:

grep '" 500 ' /var/log/nginx/access.log

Pulls all 5xx errors with URL and timestamp:

grep -E '" 5[0-9]{2} ' /var/log/nginx/access.log | awk '{print $4, $7, $9}'

Counts 4xx errors in the last hour:

awk -v d="$(date +'%d/%b/%Y:%H')" '$4 ~ d && $9 ~ /^4/' /var/log/nginx/access.log | wc -l

IP address analysis

Top 20 IPs by request count — your most active visitors (and potential abusers):

awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

Shows every request from a specific IP:

grep "^203.0.113.42 " /var/log/nginx/access.log

Counts total unique visitors:

awk '{print $1}' /var/log/nginx/access.log | sort -u | wc -l

Flags IPs with more than 1,000 requests — potential bots or DDoS sources:

awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | awk '$1 > 1000'

URL analysis

Top 20 most requested URLs:

awk '{print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

Top 404 URLs — pages people can't find (broken links, missing resources):

grep '" 404 ' /var/log/nginx/access.log | awk '{print $7}' | sort | uniq -c | sort -rn | head -20

Strips query strings to group URLs by path only:

awk '{print $7}' /var/log/nginx/access.log | cut -d'?' -f1 | sort | uniq -c | sort -rn | head -20

Traffic over time

Request count by hour — see when your traffic peaks:

awk '{print $4}' /var/log/nginx/access.log | cut -d: -f2 | sort | uniq -c

Request count by minute, for the most recent 60 minutes present in the log:

awk '{print $4}' /var/log/nginx/access.log | cut -d: -f1-3 | sort | uniq -c | tail -60

Finds your busiest minute — useful for identifying attack windows:

awk '{print $4}' /var/log/nginx/access.log | cut -d: -f1-3 | sort | uniq -c | sort -rn | head -5

Bandwidth analysis

Calculates total bytes served:

awk '{sum += $10} END {print sum " bytes"}' /var/log/nginx/access.log

Same thing in megabytes:

awk '{sum += $10} END {print sum/1024/1024 " MB"}' /var/log/nginx/access.log

Top bandwidth-consuming URLs — where your traffic is actually going:

awk '{traffic[$7] += $10} END {for (url in traffic) print traffic[url], url}' /var/log/nginx/access.log | sort -rn | head -20

User-Agent analysis

Top browsers and bots by request count:

awk -F'"' '{print $6}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

Shows only crawler activity (Googlebot and friends):

grep -i "bot\|crawler\|spider" /var/log/nginx/access.log | awk '{print $1}' | sort | uniq -c | sort -rn

Counts requests with no User-Agent header — often scanners or attack tools:

grep '" "-"$' /var/log/nginx/access.log | wc -l

Response time analysis

If your log format includes $request_time, you can find slow requests. First check whether the field is present:

grep "request_time\|upstream_response_time" /etc/nginx/nginx.conf

If it's there (typically the last field), finds the 20 slowest requests:

awk '{print $NF, $7}' /var/log/nginx/access.log | sort -rn | head -20

Shows requests that took longer than 2 seconds, slowest first (top 20):

awk '$NF > 2.0 {print $NF"s", $7, $9}' /var/log/nginx/access.log | sort -rn | head -20

Calculates average response time across all requests:

awk '{sum += $NF; count++} END {print "Avg:", sum/count, "sec"}' /var/log/nginx/access.log

Analyzing error.log

The error log has a different structure — each entry includes a severity level.

Shows the last 50 lines of error.log:

tail -50 /var/log/nginx/error.log

A typical error.log line:

2026/04/30 14:23:01 [error] 1234#1234: *567 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.1, server: example.com, request: "GET /api/data HTTP/1.1", upstream: "http://127.0.0.1:8080/api/data"

Fields: date/time, level (debug, info, notice, warn, error, crit, alert, emerg), worker PID, connection number, message, request context.
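The minimum severity that gets written is controlled by the error_log directive; everything at that level and above is logged. For example, to capture warnings and up:

error_log /var/log/nginx/error.log warn;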

Filtering by severity

Shows only critical and above (crit, alert, emerg):

grep -E '\[crit\]|\[alert\]|\[emerg\]' /var/log/nginx/error.log

Shows only error-level entries:

grep '\[error\]' /var/log/nginx/error.log | tail -50

Counts errors by severity level:

grep -oP '\[\w+\]' /var/log/nginx/error.log | sort | uniq -c | sort -rn

Reading common error patterns

Shows upstream connection failures — your backend is down or not responding:

grep "connect() failed\|upstream" /var/log/nginx/error.log | tail -20

When you see this, check whether your application (Node.js, PHP-FPM, Python) is actually running on the expected port.
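Two quick checks, assuming the backend from the error example listens on 127.0.0.1:8080 (swap in your port):

ss -tlnp | grep 8080
curl -sI http://127.0.0.1:8080/ | head -1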

Shows file permission errors:

grep "Permission denied\|No such file" /var/log/nginx/error.log | tail -20

Shows SSL/TLS errors — certificate problems, handshake failures:

grep -i "ssl\|tls\|certificate" /var/log/nginx/error.log | tail -20

Shows rate limiting and buffer overflow errors:

grep -i "limit\|too large\|buffer" /var/log/nginx/error.log | tail -20

Deduplicates error messages to show unique error types (strips varying IPs):

grep '\[error\]' /var/log/nginx/error.log | sed 's/client: [0-9.]*/client: X.X.X.X/' | sort | uniq -c | sort -rn | head -20

Working with Compressed Logs

After rotation, logs are compressed to .gz. Standard commands don't work on them — use the gz-aware variants.

Reads a compressed log without extracting it:

zcat /var/log/nginx/access.log.1.gz

Searches inside a compressed log:

zgrep "500" /var/log/nginx/access.log.1.gz

Pages through a compressed log:

zless /var/log/nginx/access.log.1.gz

Aggregates response codes across all compressed archives at once:

zcat /var/log/nginx/access.log*.gz | awk '{print $9}' | sort | uniq -c | sort -rn

Combines current log with all archives in a single pipeline:

cat /var/log/nginx/access.log <(zcat /var/log/nginx/access.log.*.gz 2>/dev/null) | awk '{print $1}' | sort | uniq -c | sort -rn | head -20

Useful Scripts

Quick log summary

Prints a full breakdown of the current access.log in one shot:

echo "=== Nginx Log Summary ===" && \
echo "Total requests: $(wc -l < /var/log/nginx/access.log)" && \
echo "Unique IPs: $(awk '{print $1}' /var/log/nginx/access.log | sort -u | wc -l)" && \
echo "--- Status codes ---" && \
awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn && \
echo "--- Top 5 IPs ---" && \
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -5 && \
echo "--- Top 5 URLs ---" && \
awk '{print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -5

Suspicious activity detection

Finds IPs hammering non-existent pages — often bots or scanners:

grep '" 404 ' /var/log/nginx/access.log | awk '{print $1}' | sort | uniq -c | sort -rn | awk '$1 > 500'

Looks for vulnerability scanning patterns in URLs:

grep -E "\.php\?|wp-admin|\.env|\.git|etc/passwd|union.*select|<script" /var/log/nginx/access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head -20

Flags responses with abnormally large payloads:

awk '$10 > 10000000 {print $10/1024/1024 "MB", $1, $7}' /var/log/nginx/access.log | sort -rn | head -10

Real-time error spike alerting

Sends an email if more than 50 HTTP 500s occurred in the previous minute:

cat > /usr/local/bin/nginx-error-monitor.sh << 'EOF'
#!/bin/bash
THRESHOLD=50
EMAIL="admin@your-domain.com"
LOG="/var/log/nginx/access.log"

# check the previous full minute (requires GNU date)
COUNT=$(awk -v time="$(date -d '1 minute ago' +'%d/%b/%Y:%H:%M')" '$4 ~ time && $9 == "500"' "$LOG" | wc -l)

if [ "$COUNT" -gt "$THRESHOLD" ]; then
    echo "ALERT: $COUNT HTTP 500 errors in last minute on $(hostname)" | \
    mail -s "Nginx 500 spike: $COUNT errors" "$EMAIL"
fi
EOF
chmod +x /usr/local/bin/nginx-error-monitor.sh
echo "* * * * * root /usr/local/bin/nginx-error-monitor.sh" >> /etc/crontab

GoAccess — Full Analytics in Your Terminal

grep and awk are great for one-off queries. For proper analytics — traffic trends, geographic breakdown, browser stats — GoAccess builds an interactive dashboard right in the terminal.

Installs GoAccess:

apt install goaccess -y    # Ubuntu/Debian
dnf install goaccess -y    # CentOS/AlmaLinux

Launches the interactive analyzer:

goaccess /var/log/nginx/access.log --log-format=COMBINED

You get a full TUI dashboard with panels for: requests, unique visitors, static files, response codes, remote hosts, browsers, operating systems, referrers, and geolocations.

Analyzes compressed archives by streaming them in:

zcat /var/log/nginx/access.log.*.gz | goaccess - --log-format=COMBINED

Generates a static HTML report you can open in a browser:

goaccess /var/log/nginx/access.log --log-format=COMBINED -o /var/www/html/report.html

Generates a live HTML report that updates in real time via WebSocket:

goaccess /var/log/nginx/access.log --log-format=COMBINED -o /var/www/html/report.html --real-time-html

Open http://your-server/report.html in a browser and watch the stats update as new requests come in.

Custom Log Formats

The default combined format is missing a few fields that matter in practice — response time, backend latency, hostname when you have multiple vhosts. A custom format adds them.

Add this inside the http block in /etc/nginx/nginx.conf:

http {
    log_format extended '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        '"$http_referer" "$http_user_agent" '
                        '$request_time $upstream_response_time '
                        '$host "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log extended;
}

What's new:

  • $request_time — total request processing time in seconds
  • $upstream_response_time — how long the backend took to respond
  • $host — the virtual hostname (essential when multiple sites share one log)
  • $http_x_forwarded_for — real client IP when Nginx sits behind a load balancer

Validates the config and reloads without downtime:

nginx -t && systemctl reload nginx
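Once the new fields are flowing, a quick sanity check: this averages $request_time, assuming the extended format above. Splitting on double quotes keeps the variable-length user agent from shifting positions; the segment after the user agent holds request_time, upstream_time, and host:

awk -F'"' '{split($7, t, " "); sum += t[1]; n++} END {print "Avg request_time:", sum/n, "sec"}' /var/log/nginx/access.log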

JSON log format

JSON logs are a natural fit if you're shipping to Elasticsearch, Loki, or Splunk — no parsing config needed.

Add to nginx.conf:

log_format json_combined escape=json
    '{'
        '"time":"$time_iso8601",'
        '"remote_addr":"$remote_addr",'
        '"method":"$request_method",'
        '"uri":"$uri",'
        '"args":"$args",'
        '"status":$status,'
        '"bytes_sent":$body_bytes_sent,'
        '"request_time":$request_time,'
        '"upstream_time":"$upstream_response_time",'
        '"referer":"$http_referer",'
        '"user_agent":"$http_user_agent",'
        '"host":"$host"'
    '}';

access_log /var/log/nginx/access.json json_combined;

Each line is now valid JSON:

{"time":"2026-04-30T14:23:01+03:00","remote_addr":"192.168.1.1","method":"GET","uri":"/api/users","args":"page=1","status":200,"bytes_sent":1543,"request_time":0.023,"upstream_time":"0.021","referer":"https://example.com","user_agent":"Mozilla/5.0","host":"api.example.com"}

Pulls all 500 errors from a JSON log using jq:

cat /var/log/nginx/access.json | jq 'select(.status >= 500)'

Finds the slowest requests from a JSON log, sorted:

cat /var/log/nginx/access.json | jq 'select(.request_time > 1) | {uri, request_time, status}' | jq -s 'sort_by(-.request_time) | .[0:20]'

Log Rotation

Without rotation, access.log will eventually eat your entire disk. Logrotate handles archiving and cleanup automatically.

Checks the current Nginx rotation config:

cat /etc/logrotate.d/nginx

The default config installed with Nginx looks like this:

/var/log/nginx/*.log {
    daily
    missingok
    rotate 52
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    postrotate
        if [ -f /var/run/nginx.pid ]; then
            kill -USR1 `cat /var/run/nginx.pid`
        fi
    endscript
}

The kill -USR1 is the right way to tell Nginx to reopen its log files after rotation. It doesn't restart — it just closes and reopens the file descriptors cleanly.
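Triggers the same reopen manually; both forms are equivalent:

nginx -s reopen
kill -USR1 $(cat /var/run/nginx.pid)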

Force-runs rotation to test your config:

logrotate -f /etc/logrotate.d/nginx

Silencing Static File Logs

Images, CSS, and JS generate a flood of log entries that rarely tell you anything useful. Turn them off to keep logs lean and meaningful.

Add inside the server block in your Nginx config:

location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff|woff2|svg|webp)$ {
    access_log off;
    expires 30d;
    add_header Cache-Control "public, no-transform";
}

Validates and applies:

nginx -t && systemctl reload nginx

Static files stop appearing in access.log. The file grows slower, and what remains is actually worth reading.
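If you still want failed static requests on record, access_log accepts an if= condition. A sketch using a map on $status — add the map to the http block and the access_log line to the static location above (the static-errors.log path is just an example):

map $status $loggable {
    ~^[23]  0;
    default 1;
}

access_log /var/log/nginx/static-errors.log combined if=$loggable;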

Centralized Log Collection

Once you're running multiple servers, tailing logs individually doesn't scale. Centralized solutions pull everything into one place.

Filebeat + Elasticsearch

Filebeat is a lightweight agent that tails log files and ships them to Elasticsearch or Logstash.

Installs Filebeat:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.13.0-amd64.deb
dpkg -i filebeat-8.13.0-amd64.deb

Minimal /etc/filebeat/filebeat.yml config for Nginx logs:

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/nginx/access.log
    fields:
      log_type: nginx_access

  - type: log
    enabled: true
    paths:
      - /var/log/nginx/error.log
    fields:
      log_type: nginx_error

output.elasticsearch:
  hosts: ["elasticsearch-server:9200"]

Promtail + Grafana Loki

Loki is the lighter-weight alternative — it indexes labels rather than log content, which keeps storage costs low.

Promtail config for Nginx:

scrape_configs:
  - job_name: nginx
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginx
          host: your-server
          __path__: /var/log/nginx/*.log

After setup, every server's logs are queryable in Grafana with filtering, search, and alerting in one interface.
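Two LogQL starters, assuming the job label from the config above: the first streams everything from nginx, the second keeps only 5xx lines:

{job="nginx"}
{job="nginx"} |~ `" 5\d{2} `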

Frequently Asked Questions

Why is access.log empty or not updating?

Three possibilities. First — logging is disabled with access_log off somewhere in the config. Check with grep -r "access_log off" /etc/nginx/. Second — Nginx is writing to a different file: grep -r "access_log" /etc/nginx/ | grep -v "off" shows you where. Third — permission issue: ls -la /var/log/nginx/ will reveal the file ownership.

How do I find what took down the site 10 minutes ago?

Check error.log for that time window: grep "30/Apr/2026:14:" /var/log/nginx/error.log. Simultaneously look for a spike in error codes: grep "30/Apr/2026:14:" /var/log/nginx/access.log | awk '{print $9}' | sort | uniq -c. A sudden spike in 502/504 means the backend died. A spike in 499 means clients were giving up waiting — the server was overloaded. A spike in 500 points to application errors.

How do I read logs when Nginx is behind Cloudflare or a load balancer?

The $remote_addr field will show the Cloudflare or load balancer IP, not the real visitor. The real IP comes through in the X-Forwarded-For or CF-Connecting-IP header. Add $http_x_forwarded_for to your log format and analyze that field instead. Better yet, configure set_real_ip_from with the Cloudflare IP ranges in your Nginx config — then $remote_addr will automatically show the real client IP.
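A minimal realip sketch for Cloudflare (the two ranges shown are examples; mirror the full, current list Cloudflare publishes at cloudflare.com/ips):

set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;
# ...one set_real_ip_from line per published range
real_ip_header CF-Connecting-IP;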

What does a 499 status code mean?

499 is Nginx-specific. It means the client closed the connection before the server finished responding — the client timed out and gave up. Frequent 499s are a signal your server is responding too slowly. Look at the $request_time values alongside those 499 entries to see how long clients were waiting before bailing.
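Pairs each 499 with its response time and URL, assuming $request_time is the last field as in the response-time section above:

grep '" 499 ' /var/log/nginx/access.log | awk '{print $NF"s", $7}' | sort -rn | head -20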

How do I figure out which virtual host is getting the most traffic?

If all vhosts log to a single file, add $host to your log format, then count occurrences of that field; its exact position depends on your format, so adjust the field number accordingly (or use the JSON approach shown below). The cleaner approach is giving each server block its own access.log — then traffic per domain is immediately obvious without any analysis.
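With the JSON format defined earlier, the per-host breakdown is a one-liner (assumes you log to access.json and have jq installed):

jq -r '.host' /var/log/nginx/access.json | sort | uniq -c | sort -rn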

How do I clear log files without restarting Nginx?

Use truncate or an empty redirect: > /var/log/nginx/access.log. Nginx keeps writing to the same file descriptor, so logging continues uninterrupted — entries will be written from the start of the now-empty file. After clearing, send the reopen signal as a best practice: nginx -s reopen or kill -USR1 $(cat /var/run/nginx.pid).

Conclusion

Nginx logs are a complete picture of what's happening on your server. For day-to-day work, tail -f with grep covers real-time monitoring, and awk handles quick analysis. GoAccess fills the gap when you want visual dashboards without standing up a full ELK stack. For multi-server infrastructure, Filebeat + Elasticsearch or Promtail + Loki give you centralized visibility across your entire fleet.

Three things worth configuring right now: a custom log format with $request_time so you can track slow requests, per-vhost log files so traffic per domain is clear, and logrotate so the logs don't silently fill your disk.

A VPS on THE.Hosting with NVMe drives in RAID-10 handles high-volume log writes without I/O becoming a bottleneck. Support is available 24/7 via Telegram for any Nginx configuration questions.
