It's 3 AM. Your server starts crawling. Logs stop writing. The database refuses new entries. The monitoring dashboard lights up red. First thought: "DDoS? A breach? Hardware failure?"
Then you open the console and see a simple but devastating message: "No space left on device".
Disk full. Completely. Now you have only a few minutes to find the problem, free up space, and restore the system before users start noticing issues.
Sound familiar? Then this article is for you. Today we'll break down how to quickly diagnose disk space problems in Linux, find space-eating files, and set up your system so you never wake up to such surprises again.
A full disk isn't just an inconvenience. It's a serious problem: services crash, databases refuse new writes, and logs stop being written, so you lose visibility exactly when you need it most.
According to system administrator statistics, about 15% of all production incidents are related to disk space issues. And the most frustrating part? Most of them could have been prevented with simple monitoring.
The df (disk free) command is your first tool for assessing the situation. It shows the big picture of disk space usage across all mounted filesystems.
The simplest way to check disk space:
df -h
The -h flag (human-readable) displays sizes in a format convenient for humans—gigabytes and megabytes instead of kilobyte blocks.
Example output:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 50G 42G 8.0G 85% /
/dev/sdb1 200G 180G 20G 90% /var/www
tmpfs 7.8G 120K 7.8G 1% /dev/shm
The Filesystem column lists the device (for example, /dev/sda1 is the first partition of the first disk). The main thing to watch is the Use% column: in the example above, / is already at 85% and /var/www at 90%, which means it's time to act.
Important nuance: Linux reserves 5% of space on the root partition for system processes and the root user. So even at 95%, you may already have write problems for regular users.
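On ext2/3/4 filesystems you can check exactly how much is reserved (a quick check; /dev/sda1 here is just the device from the example output above):
tune2fs -l /dev/sda1 | grep -i "reserved block count"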
Check specific directory:
df -h /var/log
This command shows information about the filesystem where /var/log is located.
View filesystem type:
df -Th
Adds a column with FS type (ext4, xfs, btrfs, etc.). Useful when you have multiple different filesystem types.
Check inode usage:
df -ih
An inode is a special data structure that stores metadata about each file. You can exhaust inodes even if there's still space on disk. This happens when you have millions of small files (e.g., cache, logs, mailboxes).
Sign of the problem: disk shows free space, but system complains about its absence. In this case, check inodes.
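If you suspect inode exhaustion, GNU du can count inodes instead of bytes and point you at the directory holding the most files (a rough sketch; the --inodes option needs coreutils 8.22 or newer):
du --inodes -x / 2>/dev/null | sort -rn | head -20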
The df command showed the disk is full. Great. But what exactly is taking up all the space? This is where the du (disk usage) command comes in.
Here's the command I use most often for quick diagnostics:
du -h --max-depth=1 / | sort -hr | head -20
Let's break down what's happening:
du -h --max-depth=1 / — shows size of all directories at depth 1 from root
sort -hr — sorts results by size (largest to smallest)
head -20 — shows only top-20 largest
Example output:
42G /var
15G /usr
8.5G /home
2.1G /opt
850M /tmp
Now you immediately see the problem is in /var. Go deeper:
du -h --max-depth=1 /var | sort -hr | head -20
And so on, until you find the specific directory or file eating all the space.
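One flag worth adding when you start from the root: -x (--one-file-system) keeps du on the current filesystem, so other mounts and pseudo-filesystems don't distort the numbers. A variant of the same command:
du -hx --max-depth=1 / | sort -hr | head -20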
Sometimes the problem isn't the quantity of files, but one or two giant files. For example, a forgotten database dump or a bloated log file.
Find all files larger than 1 GB:
find / -type f -size +1G -exec ls -lh {} \; 2>/dev/null
Or even more convenient — find the 20 largest files in the system:
find / -type f -exec du -h {} \; 2>/dev/null | sort -rh | head -20
Warning: This command can take a long time on large filesystems. Better to first limit the search to a specific directory:
find /var -type f -size +1G -exec ls -lh {} \; 2>/dev/null
In 80% of cases, disk space problems are caused by logs, especially when rotation isn't configured or an application keeps writing the same error in a loop.
Check the size of log directory:
du -sh /var/log
And look at individual files:
du -h /var/log/* | sort -rh | head -10
Problem: One log file took up 30 GB out of 50.
Quick fix:
DON'T delete the file! This can break the application.
Instead, clear its contents:
> /var/log/application.log
Or trim to last 1000 lines:
tail -n 1000 /var/log/application.log > /tmp/temp.log
mv /tmp/temp.log /var/log/application.log
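One caveat: if the application keeps the log open, mv leaves it writing to the old file (the same trap as the deleted-file case below), so the space may not actually be freed. A variant that trims the file in place instead:
tail -n 1000 /var/log/application.log > /tmp/temp.log
cat /tmp/temp.log > /var/log/application.log   # overwrite in place, keeping the same inode
rm /tmp/temp.log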
Long-term solution: Configure logrotate for automatic rotation by creating a config file at /etc/logrotate.d/application:
/var/log/application.log {
    daily                         # rotate every day
    rotate 7                      # keep the last 7 rotated logs
    compress                      # gzip rotated logs
    delaycompress                 # compress one rotation later
    missingok                     # don't complain if the log is missing
    notifempty                    # skip rotation when the log is empty
    create 644 www-data www-data  # recreate the log with these permissions and owner
}
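To check that the new rule actually works, you can force a rotation right away (the -f flag runs it regardless of the schedule):
logrotate -f /etc/logrotate.d/application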
Problem: The cache directory contains millions of tiny files, each only 1-2 KB.
Diagnosis:
Count number of files:
find /var/cache/application -type f | wc -l
If there are more than 100,000 files — this is likely your problem.
Quick fix:
Delete files older than 7 days:
find /var/cache/application -type f -mtime +7 -delete
Or delete everything:
rm -rf /var/cache/application/*
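Note that with millions of entries the shell glob in the last command can fail with "Argument list too long". A find-based variant avoids expanding the file list in the shell:
find /var/cache/application -mindepth 1 -delete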
Problem: Old backups taking up 150 GB.
Diagnosis:
du -sh /var/backups
ls -lht /var/backups | head
Solution: Delete old backups and set up automatic cleanup.
Keep only last 5 backups:
cd /var/backups
ls -t | tail -n +6 | xargs rm -f   # keep the 5 newest; assumes filenames without spaces
Or delete backups older than 30 days:
find /var/backups -type f -name "*.sql.gz" -mtime +30 -delete
Problem: Docker has accumulated gigabytes of unused images, containers, and volumes.
Diagnosis:
docker system df
Quick cleanup:
Remove stopped containers, unused networks, and unused images:
docker system prune -a
Remove everything including volumes (CAREFUL!):
docker system prune -a --volumes
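If you'd rather keep recent images and build cache, the prune can be limited by age (a sketch; 168h, i.e. one week, is just an example threshold):
docker system prune -a -f --filter "until=168h"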
Problem: You deleted a huge log file, but space didn't free up.
Reason: If a process holds the file open, space won't be freed until process restart.
Solution: Find process holding the deleted file:
lsof | grep deleted
And restart that process.
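If restarting isn't possible right away, you can usually reclaim the space by truncating the deleted file through the process's file descriptor (a sketch; <PID> and <FD> are placeholders taken from the lsof output):
lsof +L1 | grep deleted   # deleted-but-still-open files with their PID and FD
: > /proc/<PID>/fd/<FD>   # truncate the deleted file in place; substitute the real numbers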
Problem: You deleted everything in /tmp and broke running applications.
Proper way: Delete only old files:
find /tmp -type f -atime +10 -delete
Because of the 5% reserve for root, regular users can run out of space while df still shows a few percent free (it's also why Size minus Used doesn't add up to Avail).
Solution: Reduce reserve on non-system partitions.
Reduce reserve from 5% to 1%:
tune2fs -m 1 /dev/sdb1   # ext2/3/4 only; leave the default reserve on the root partition
Problem: The find / command hammers the disk and slows down the whole system.
Solution: Use ionice to lower priority:
ionice -c3 find / -type f -size +1G
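You can also lower the CPU priority at the same time by combining it with nice:
ionice -c3 nice -n 19 find / -type f -size +1G 2>/dev/null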
ncdu is an interactive disk space analyzer with a convenient interface.
Installation:
apt install ncdu # Debian/Ubuntu
yum install ncdu # CentOS/RHEL
Usage:
ncdu /var
You get an interactive directory list with arrow key navigation and ability to delete files right from the interface.
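If you want to scan the whole root filesystem, the -x flag keeps ncdu from crossing into other mounts:
ncdu -x /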
Combining Commands for Detailed Analysis
Find the largest files and directories in /var:
du -ah /var | sort -rh | head -20
Find files modified in last 24 hours and larger than 100 MB:
find /var -type f -mtime -1 -size +100M -exec ls -lh {} \;
Show top-10 directories by file count:
for dir in /var/*/; do
    echo -n "$dir: "
    find "$dir" -type f | wc -l
done | sort -t: -k2 -rn | head -10
Watch which directories are growing right now:
watch -n 1 'df -h; echo "---"; du -sh /var/*'
Or install iotop for disk activity monitoring:
apt install iotop
iotop -o # Show only active processes
Make sure logrotate is configured properly.
Check configuration:
cat /etc/logrotate.conf
ls /etc/logrotate.d/
Test rotation in dry-run mode:
logrotate -d /etc/logrotate.conf
Add daily cleanup to cron.
Clean files older than 7 days:
0 3 * * * find /tmp -type f -atime +7 -delete
If using Docker, set up automatic cleanup.
Add weekly cleanup to cron:
0 2 * * 0 docker system prune -f
On shared servers, set quotas so one user can't fill entire disk.
Install quotas (for ext4):
apt install quota
Enable quotas in /etc/fstab.
Configure limits with edquota.
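A minimal sketch of those steps (assuming an ext4 /home on /dev/sdb1; the device, mount point, and username are placeholders):
# /etc/fstab: add the quota options to the mount
/dev/sdb1  /home  ext4  defaults,usrquota,grpquota  0  2
mount -o remount /home    # apply the new mount options
quotacheck -cugm /home    # build the quota files (aquota.user, aquota.group)
quotaon /home             # enable quotas
edquota -u alice          # set block and inode limits for a user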
Disk space problems are one of the most common causes of failures in Linux systems. But with the right approach, they're easy to prevent: configure log rotation, schedule regular cleanups, and keep an eye on df -h. Remember: it's much easier to spend 10 minutes setting up monitoring than an hour on emergency disk cleanup at 3 AM.
Want hosting where you don't have to worry about disk space?
THE.Hosting offers VPS and dedicated servers with NVMe RAID 10 and the ability to scale quickly. Out of space? Expand your disk in a couple of clicks without system downtime.