"No space left on device" — one of those errors that appears at the worst possible moment. The site stops accepting uploads, the database hangs, logs stop writing, deploys fail with cryptic errors. One cause: the disk is completely full.
No need to panic. The situation resolves in 10-30 minutes even on a heavily loaded server. The key is to act methodically: confirm the problem is disk space, find the culprit, then clean. This guide walks through the entire process — from the first diagnostic command to setting up monitoring so it doesn't happen again.
Confirm the Disk Is Actually Full
Before doing anything, check disk state.
Shows used and free space on all mounted partitions:
df -h
The -h flag outputs sizes in human-readable format (GB, MB). Look at the Use% column — if it shows 100% or close to it, the problem is confirmed.
Typical output when disk is full:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 50G 50G 0 100% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
If Avail is 0 and Use% is 100% — confirmed, work from here.
There's another situation that mimics a full disk while space is available — running out of inodes. Inodes are filesystem records about files: you can have gigabytes free, but if inodes are exhausted, no new files can be created.
Checks inode usage:
df -i
If IUse% shows 100%, the problem is inodes, not space. The fix is the same — find and delete files — but look for vast numbers of tiny files (PHP session caches, message queues, temp files) rather than large ones.
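To find where those tiny files accumulate, count files per directory. A rough sketch assuming GNU find; it starts from /var, adjust the path to whichever partition is affected:
find /var -xdev -type f -printf '%h\n' 2>/dev/null | sort | uniq -c | sort -rn | head -20
The first column is the number of files in each directory; directories with hundreds of thousands of entries are the usual suspects.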
Find What's Taking Up Space
The disk is full; now you need to find out what is responsible. Narrow the search down from the root directory to the specific culprit.
See How Much Each Top-Level Directory Takes
Shows size of each directory at root, sorted descending:
du -sh /* 2>/dev/null | sort -rh | head -20
Flags: -s summarizes directory contents instead of listing line by line, -h is human-readable format. 2>/dev/null suppresses permission errors on system directories.
Look at the top lines — the culprit is usually obvious. /var frequently takes the most space on servers due to logs and databases.
Drill Into the Suspicious Directory
Shows what's inside /var, sorted by size:
du -sh /var/* 2>/dev/null | sort -rh | head -20
Repeat for each suspicious directory until you reach specific files.
Find the Largest Files Across the Whole Disk
Searches for files larger than 100 MB across the entire filesystem:
find / -type f -size +100M -exec ls -lh {} \; 2>/dev/null | sort -k5 -rh | head -20
Takes longer than du on a large filesystem, but returns a list of specific files with paths and sizes. Often it immediately reveals the cause: a 20 GB log file, an old database dump, a forgotten archive.
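A variant of the same search, assuming GNU find: -xdev keeps it on the root filesystem and skips other mounts, and -printf outputs byte sizes so sort doesn't have to parse human-readable suffixes:
find / -xdev -type f -size +100M -printf '%s %p\n' 2>/dev/null | sort -rn | head -20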
Alternative — Interactive ncdu Utility
If ncdu is available, it's more convenient for interactive exploration:
apt install ncdu -y # Debian/Ubuntu
dnf install ncdu -y # CentOS/AlmaLinux
Launches an interactive disk space browser:
ncdu /
Navigate directories with arrow keys, sizes are shown instantly, delete files directly from the interface with d.
Free Up Space — Common Culprits
Logs
Logs are the most frequent cause of full disks on production servers. Nginx, Apache, MySQL, applications — all write logs. Without proper rotation, 50-100 GB accumulates in a month.
Shows size of system log files:
du -sh /var/log/* | sort -rh | head -20
Checks size of a specific log file:
ls -lh /var/log/nginx/access.log
You can't simply delete a log that a service is actively writing to: the service keeps the file open through its descriptor, continues writing into the deleted file, and the space is never freed. The correct approach is to empty the file's content without deleting it.
Empties log file content without deleting it:
> /var/log/nginx/access.log
Or:
truncate -s 0 /var/log/nginx/access.log
Forces rotation of all logs via logrotate:
logrotate -f /etc/logrotate.conf
Deletes old compressed logs (files older than 30 days):
find /var/log -name "*.gz" -mtime +30 -delete
systemd Journal (journald)
Journald accumulates logs from all systemd services and can quietly grow to several gigabytes.
Shows how much space the journal occupies:
journalctl --disk-usage
Cleans the journal keeping only the last 500 MB:
journalctl --vacuum-size=500M
Removes entries older than 2 weeks:
journalctl --vacuum-time=2weeks
Package Manager Caches
APT and DNF store downloaded packages that are no longer needed after installation.
Clears APT cache (Debian/Ubuntu):
apt clean
Removes only outdated package cache:
apt autoclean
Removes packages no longer needed as dependencies:
apt autoremove -y
Clears DNF cache (CentOS/AlmaLinux/Rocky):
dnf clean all
Docker — Images, Containers, Volumes
Docker quietly accumulates gigabytes of unused images, stopped containers, and anonymous volumes.
Shows how much space Docker is using:
docker system df
Removes everything unused — stopped containers, untagged images, networks, build cache:
docker system prune -a
The -a flag removes every image not used by at least one container. Without the flag, only "dangling" (untagged) images are removed.
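If you want to keep images built recently, prune also accepts a time filter. A sketch that removes only unused images older than a week; the 168-hour cutoff is an arbitrary example to adjust:
docker image prune -a --filter "until=168h"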
Removes unused volumes only (caution — data is permanently deleted):
docker volume prune
Removes Docker build cache:
docker builder prune -a
Old Linux Kernels
After system updates, old kernels remain on disk. The /boot partition on Ubuntu is often small and fills up quickly.
Shows which kernels are installed:
dpkg --list | grep linux-image
The current kernel is what uname -r outputs. Everything else can be removed.
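Shows the currently running kernel version (the one you must keep):
uname -r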
Removes old kernels automatically keeping the current one (Ubuntu/Debian only):
apt autoremove --purge -y
On CentOS/AlmaLinux, keeps only the last 2 kernels:
dnf remove $(dnf repoquery --installonly --latest-limit=-2 -q)
Temporary Files
Shows size of system temp directories:
du -sh /tmp /var/tmp
Deletes files from /tmp older than 7 days:
find /tmp -type f -mtime +7 -delete
Deletes files from /var/tmp older than 30 days:
find /var/tmp -type f -mtime +30 -delete
Dumps and Core Files
When processes crash, the system may save core dumps — full memory snapshots of the process. A single file can weigh several gigabytes.
Finds core dump files:
find / \( -name "core" -o -name "core.*" \) 2>/dev/null | head -20
Shows where system crash dumps are stored:
ls -lh /var/crash/ 2>/dev/null
ls -lh /var/lib/systemd/coredump/ 2>/dev/null
Removes all saved core dumps stored by systemd (coredumpctl itself has no delete command, so remove the files directly):
rm -f /var/lib/systemd/coredump/*
Specific Situations
Database Taking Too Much Space
In MySQL and PostgreSQL, deleting records doesn't immediately return disk space to the OS: the freed pages stay inside the data files until the tables are rebuilt or vacuumed.
Shows the size of each database in MySQL:
mysql -e "SELECT table_schema AS 'Database', ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS 'Size (MB)' FROM information_schema.tables GROUP BY table_schema ORDER BY 2 DESC;"
Shows the size of each database in PostgreSQL:
psql -c "SELECT pg_database.datname, pg_size_pretty(pg_database_size(pg_database.datname)) AS size FROM pg_database ORDER BY pg_database_size(pg_database.datname) DESC;"
Defragments a table and returns space to the OS (MySQL, run inside mysql console):
OPTIMIZE TABLE table_name;
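To see which tables are worth optimizing, a sketch of a query that lists tables by reclaimable space; the data_free column in information_schema is an estimate of unused allocated bytes:
mysql -e "SELECT table_schema, table_name, ROUND(data_free / 1024 / 1024, 2) AS reclaimable_mb FROM information_schema.tables WHERE data_free > 0 ORDER BY data_free DESC LIMIT 10;"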
Defragments all PostgreSQL databases (warning: --full locks tables during execution — run during low-traffic hours):
vacuumdb --all --full
Files Deleted but Space Not Freed
A classic trap — deleted a large file, but df -h still shows the disk full. Reason: the file is removed from the directory, but a process holds it open and the kernel won't release the blocks.
Finds processes holding deleted files open:
lsof | grep deleted
The output shows the process name, PID, and size of the deleted file. Solution — restart that process. It will close the file descriptor, and the kernel will free the blocks.
Restarts the service holding the deleted file (replace nginx with the relevant service):
systemctl restart nginx
If restarting the process isn't an option, empty the file through /proc instead. Find the PID and file descriptor number in the lsof output, then:
> /proc/PID/fd/DESCRIPTOR_NUMBER
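For example, if the lsof output shows nginx with PID 1234 holding the deleted log open on descriptor 8 (both numbers here are illustrative):
> /proc/1234/fd/8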
A Specific Partition Is Full (/var or /home)
If you have multiple partitions and only one is full, you can temporarily create a symlink from the full partition to one with available space.
Example: /var/log is full but /home has space.
Moves logs to another partition (stop the services that write to /var/log first, otherwise they keep writing to the old files through their open descriptors and the space isn't freed):
mv /var/log /home/var_log_backup
Creates a symbolic link back:
ln -s /home/var_log_backup /var/log
This is a temporary workaround. The proper fix is to expand the partition or configure rotation to prevent overflow.
Verify the Result
After each cleanup step, check that space was freed:
df -h
Shows current state. If space is freed, services should resume normal operation automatically. If a service hung due to the full disk, restart it:
systemctl restart service-name
Set Up Monitoring to Prevent Recurrence
Finding and cleaning is half the job. The important part is setting up alerts so you know about low disk space before it runs out.
Simple Email Alert Script
Creates the monitoring script:
cat > /usr/local/bin/disk-monitor.sh << 'EOF'
#!/bin/bash
THRESHOLD=85
EMAIL="admin@your-domain.com"
df -h | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{print $5 " " $1}' | while read output; do
    USAGE=$(echo $output | awk '{print $1}' | cut -d'%' -f1)
    PARTITION=$(echo $output | awk '{print $2}')
    if [ "$USAGE" -ge "$THRESHOLD" ]; then
        echo "Disk $PARTITION is ${USAGE}% full on $(hostname)" | mail -s "DISK ALERT: $PARTITION ${USAGE}%" $EMAIL
    fi
done
EOF
chmod +x /usr/local/bin/disk-monitor.sh
Adds a check to cron every 30 minutes:
echo "*/30 * * * * root /usr/local/bin/disk-monitor.sh" >> /etc/crontab
Configure Automatic Log Rotation
Prevents uncontrolled Nginx log growth. Creates a rotation config:
cat > /etc/logrotate.d/nginx-custom << 'EOF'
/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        nginx -s reopen
    endscript
}
EOF
Parameters: daily rotates once a day, rotate 14 keeps 14 archives, compress gzips old logs, and delaycompress postpones compressing the most recent rotated log until the next cycle, so the process can finish writing to it after the descriptors are switched.
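Tests the new config without touching anything (-d runs logrotate in debug mode and only prints what it would do):
logrotate -d /etc/logrotate.d/nginx-custom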
Permanently limits journald journal size — edits the config:
sed -i 's/#SystemMaxUse=/SystemMaxUse=2G/' /etc/systemd/journald.conf
sed -i 's/#SystemMaxFileSize=/SystemMaxFileSize=200M/' /etc/systemd/journald.conf
systemctl restart systemd-journald
After this, journald will never grow beyond 2 GB.
Disables core dump storage by the system:
sed -i 's/#Storage=external/Storage=none/' /etc/systemd/coredump.conf
systemctl daemon-reload
When You Need to Expand the Disk
If there's still little free space after all the cleanup, it's time to expand the disk. Cleanup only buys time: if the data keeps growing, the disk will fill up again.
Shows current disk layout:
lsblk
On a VPS, expand the disk through the hosting control panel: increase the size of the existing virtual disk or attach an additional one. After that, expand the partition and the filesystem on it.
Installs the growpart utility if not present:
apt install cloud-guest-utils -y # Ubuntu/Debian
dnf install cloud-utils-growpart -y # CentOS/AlmaLinux
Checks for unallocated space after expanding in the control panel:
fdisk -l /dev/sda
Expands the partition into all available unallocated space (growpart works with both MBR and GPT layouts):
growpart /dev/sda 1
Expands ext4 filesystem to match partition size:
resize2fs /dev/sda1
For xfs:
xfs_growfs /
After these commands, df -h shows the larger partition without rebooting.
Frequently Asked Questions
Disk is full but I can't find any large files — why?
Two possible causes. First — files were deleted but processes hold them open. Check with lsof | grep deleted and restart the offending service. Second — inodes are exhausted, not space. Check with df -i: if IUse% is 100%, look for directories with enormous numbers of tiny files (PHP session caches, message queues, application temp files).
Which directories should I clean first?
In order of priority: /var/log (logs), /var/lib/docker (if using Docker), /tmp and /var/tmp (temp files), /var/cache/apt or /var/cache/dnf (package caches), /var/lib/systemd/coredump (core dumps). These locations give maximum recovery with minimal risk.
Can I delete files in /proc and /sys?
No. /proc and /sys are virtual filesystems that don't occupy real disk space. They expose kernel and process state as files. There's nothing to delete there and no reason to try.
How do I find which process is actively writing to disk right now?
Use iotop — shows I/O activity per process in real time. Install with apt install iotop or dnf install iotop. Run iotop -o to see only processes with active I/O. The process with the highest DISK WRITE value is the one writing most.
After cleanup the site still isn't working. What now?
Some services don't recover automatically after the disk becomes available again. Restart them in sequence: web server (systemctl restart nginx or apache2), database (systemctl restart mysql or postgresql), application. Check each service's logs after restart — they'll show if other problems exist.
How do I prevent this from happening again?
Three steps: configure logrotate for every service that writes logs, limit journald size in /etc/systemd/journald.conf with the SystemMaxUse parameter, add disk monitoring with email or Telegram alerts when usage exceeds 80-85%. This is enough to get warnings several days before overflow.
Conclusion
A full disk is a solvable problem. The sequence: df -h to confirm, du -sh /* | sort -rh to find the culprit, log and cache cleanup to quickly free space, monitoring and rotation setup to prevent recurrence.
Most cases resolve with three commands — clearing the package cache, cleaning the journald journal, and removing old Docker images. If that's not enough, lsof | grep deleted and ncdu will track down the non-obvious cause.
A VPS on THE.Hosting with NVMe drives in RAID-10 provides I/O performance headroom even when the disk is close to full. When needed, disk volume expands through the control panel without stopping the server. Support is available 24/7 via Telegram — they'll help with both diagnostics and partition expansion.