
How to Quickly Check Disk Usage in Linux: From Panic to Control in 5 Minutes

24.10.2025, 20:08

It's 3 AM. Your server slows to a crawl. Logs stop writing. The database refuses new entries. The monitoring dashboard lights up red. First thought: "DDoS? A breach? Hardware failure?"

Then you open the console and see a simple but devastating message: "No space left on device".

Disk full. Completely. Now you have just a few minutes to find the problem, free up space, and restore the system before users start noticing issues.

Sound familiar? Then this article is for you. Today we'll break down how to quickly diagnose disk space problems in Linux, find space-eating files, and set up your system so you never wake up to such surprises again.

Why a Full Disk is a Catastrophe

A full disk isn't just an inconvenience. It's a serious problem that can lead to:

  • Database shutdown: PostgreSQL, MySQL, and other DBMS can't write new data without free space.
  • Log loss: The system stops recording critical diagnostic information exactly when you need it most.
  • Update failure: Package managers require temporary space to unpack and install updates.
  • Application crashes: Many programs create temporary files while running and fail if they can't create them.
  • Data corruption: In some cases, sudden lack of space can lead to filesystem corruption.

By some estimates, around 15% of all production incidents are related to disk space issues. And the most frustrating part? Most of them could have been prevented with simple monitoring.

df: Quick Diagnostics in Seconds

The df (disk free) command is your first tool for assessing the situation. It shows the big picture of disk space usage across all mounted filesystems.

Basic Usage

The simplest way to check disk space:

df -h

The -h flag (human-readable) displays sizes in a format convenient for humans—gigabytes and megabytes instead of kilobyte blocks.

Example output:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   42G   8.0G  85% /
/dev/sdb1       200G  180G   20G   90% /var/www
tmpfs           7.8G  120K  7.8G   1% /dev/shm

What Do These Columns Mean?

  • Filesystem: Device name (e.g., /dev/sda1 — first partition of first disk).
  • Size: Total filesystem size.
  • Used: Space used.
  • Avail: Available free space.
  • Use%: Percentage filled — this is the most important number.
  • Mounted on: Mount point — where this filesystem is accessible in the system.

Critical Thresholds

Here's what you need to watch for:

  • 85-90% — Time to start planning cleanup or expansion
  • 90-95% — Critical level, action needed urgently
  • 95%+ — Emergency situation, system may start failing at any moment

Important nuance: by default, ext filesystems reserve 5% of the space on the root partition for system processes and the root user, so regular users run out of space before the disk is literally full.
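
If you'd rather get a warning before reaching these levels, a tiny script run from cron can do the check for you. This is a minimal sketch assuming GNU df (for the --output option); the 85% threshold and the echo-based alert are placeholders to adapt to your notification setup:

#!/bin/bash
# Minimal disk-usage alert: print a warning for any filesystem above THRESHOLD.
# Replace echo with mail, a webhook, etc. when running it from cron.
THRESHOLD=85

df --output=target,pcent | tail -n +2 | while read -r mount pcent; do
    usage=${pcent%\%}
    if [ "$usage" -ge "$THRESHOLD" ]; then
        echo "WARNING: $mount is ${usage}% full"
    fi
done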

Useful df Options

Check specific directory:

df -h /var/log

This command shows information about the filesystem where /var/log is located.

View filesystem type:

df -Th

Adds a column with FS type (ext4, xfs, btrfs, etc.). Useful when you have multiple different filesystem types.

Check inode usage:

df -ih

An inode is a special data structure that stores metadata about each file. You can exhaust inodes even if there's still space on disk. This happens when you have millions of small files (e.g., cache, logs, mailboxes).
The telltale sign: df shows free space, but the system still complains there is none. In that case, check inodes.
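
To see where the inodes are actually going, GNU du can count inodes per directory (the --inodes option requires coreutils 8.22 or newer; /var here is just an example starting point):

du --inodes -x --max-depth=1 /var 2>/dev/null | sort -rn | head -10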

du: Detective Work to Find the Problem

The df command showed the disk is full. Great. But what exactly is taking up all the space? This is where the du (disk usage) command comes in.

Finding the Biggest Directories

Here's the command I use most often for quick diagnostics:

du -h --max-depth=1 / | sort -hr | head -20

Let's break down what's happening:

du -h --max-depth=1 / — shows size of all directories at depth 1 from root

sort -hr — sorts results by size (largest to smallest)

head -20 — shows only the top 20 largest

Example output:

42G    /var
15G    /usr
8.5G   /home
2.1G   /opt
850M   /tmp

Now you immediately see the problem is in /var. Go deeper:

du -h --max-depth=1 /var | sort -hr | head -20

And so on, until you find the specific directory or file eating all the space.
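
Two refinements worth knowing for this drill-down, assuming GNU du: the -x flag keeps du from crossing into other mounted filesystems (so separate partitions and pseudo-filesystems don't skew the numbers), and redirecting stderr hides permission-denied noise when you're not root:

du -xh --max-depth=1 / 2>/dev/null | sort -hr | head -20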

Finding Large Files

Sometimes the problem isn't the quantity of files, but one or two giant files. For example, a forgotten database dump or a bloated log file.
Find all files larger than 1 GB:

find / -type f -size +1G -exec ls -lh {} \; 2>/dev/null

Or even more convenient — find the 20 largest files in the system:

find / -type f -exec du -h {} \; 2>/dev/null | sort -rh | head -20

Warning: This command can take a long time on large filesystems. It's better to first limit the search to a specific directory:

find /var -type f -size +1G -exec ls -lh {} \; 2>/dev/null

Why Logs are the Main Enemy of Free Space

In 80% of cases, disk space problems are caused by logs. Especially if:

  • Log rotation is disabled.
  • An application writes debug-level logs in production.
  • A bug in the code generates millions of repeated entries.

Check the size of the log directory:

du -sh /var/log

And look at individual files:

du -h /var/log/* | sort -rh | head -10
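
If you suspect a bug is spamming the same message (the third cause in the list above), counting duplicate lines at the end of the log confirms it quickly; the log path and line count here are just examples:

tail -n 20000 /var/log/application.log | sort | uniq -c | sort -rn | head -5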

Practical Scenarios: What to Do in Different Situations

Scenario 1: Bloated Log File

Problem: One log file took up 30 GB out of 50.

Quick fix:

DON'T delete the file! This can break the application.

Instead, clear its contents:

truncate -s 0 /var/log/application.log

Or trim it to the last 1000 lines (note: if the application keeps the file open, it will continue writing to the old, now-deleted file until it is restarted — see Mistake 1 below):

tail -n 1000 /var/log/application.log > /tmp/temp.log
mv /tmp/temp.log /var/log/application.log

Long-term solution: Configure logrotate for automatic rotation:

# /etc/logrotate.d/application
/var/log/application.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    create 644 www-data www-data
}

Scenario 2: Millions of Small Files

Problem: The cache directory contains millions of tiny files, 1-2 KB each.

Diagnosis:

Count the number of files:

find /var/cache/application -type f | wc -l

If there are more than 100,000 files — this is likely your problem.

Quick fix:

Delete files older than 7 days:

find /var/cache/application -type f -mtime +7 -delete

Or, if the application can safely rebuild its cache, delete everything:

rm -rf /var/cache/application/*
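
Note that with millions of entries, the shell glob in the last command can fail with "Argument list too long"; letting find do the deletion avoids expanding the file list in the shell:

find /var/cache/application -mindepth 1 -delete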

Scenario 3: Forgotten Backups

Problem: Old backups taking up 150 GB.

Diagnosis:

du -sh /var/backups
ls -lht /var/backups | head

Solution: Delete old backups and set up automatic cleanup.

Keep only last 5 backups:

cd /var/backups
ls -t | tail -n +6 | xargs rm -f

Or delete backups older than 30 days:

find /var/backups -type f -name "*.sql.gz" -mtime +30 -delete
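
To make the cleanup automatic, the same command can go into root's crontab (crontab -e); the schedule and retention period below are just examples:

# Remove .sql.gz backups older than 30 days, every day at 04:00
0 4 * * * find /var/backups -type f -name "*.sql.gz" -mtime +30 -delete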

Scenario 4: Docker Devouring Space

Problem: Docker accumulated gigabytes of unused images, containers, and volumes.

Diagnosis:

docker system df
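
If the RECLAIMABLE column shows large numbers, the -v flag breaks usage down per image, container, and volume so you can see exactly what is safe to remove:

docker system df -v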

Quick cleanup:

Remove stopped containers, unused networks, build cache, and all unused images:

docker system prune -a

Remove everything including volumes (CAREFUL!):

docker system prune -a --volumes

Common Mistakes and How to Avoid Them

Mistake 1: Deleting File Still in Use

Problem: You deleted a huge log file, but space didn't free up.

Reason: If a process still holds the file open, the space won't be freed until the process closes it or restarts.

Solution: Find the process holding the deleted file:

lsof | grep deleted

And restart that process.
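
If you can't restart the process right away, a common workaround is to truncate the still-open deleted file through /proc, using the PID and file-descriptor number from the lsof output (both are placeholders below):

# <PID> and <FD> are taken from the lsof output (FD without the r/w/u suffix)
truncate -s 0 /proc/<PID>/fd/<FD>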

Mistake 2: Cleaning /tmp Without Checking

Problem: Deleted everything in /tmp, broke running applications.

Proper way: Delete only old files:

find /tmp -type f -atime +10 -delete

Mistake 3: Ignoring Reserved Space

Because of the 5% reserve for root, regular users can hit "No space left on device" while Used is still noticeably below Size in the df output.

Solution: Reduce the reserve on non-system partitions.

Reduce the reserve from 5% to 1%:

tune2fs -m 1 /dev/sdb1
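
Before changing it, you can check the current reserve (tune2fs only works on ext2/3/4 filesystems):

tune2fs -l /dev/sdb1 | grep -i "reserved block count"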

Mistake 4: Running Search on Root Partition Under Load

Problem: A find / command puts heavy load on the disk and slows the whole system down.

Solution: Use ionice to lower priority:

ionice -c3 find / -type f -size +1G

Advanced Techniques

Quick Search with ncdu

ncdu is an interactive disk space analyzer with a convenient interface.

Installation:

apt install ncdu  # Debian/Ubuntu
yum install ncdu  # CentOS/RHEL

Usage:

ncdu /var

You get an interactive directory list with arrow-key navigation and the ability to delete files right from the interface.
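
When scanning from the root, the -x flag keeps ncdu on a single filesystem, so network mounts and other partitions don't get pulled into the scan:

ncdu -x /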

Combining Commands for Detailed Analysis

Find the largest files and directories in /var:

du -ah /var | sort -rh | head -20

Find files modified in last 24 hours and larger than 100 MB:

find /var -type f -mtime -1 -size +100M -exec ls -lh {} \;

Show the top 10 subdirectories of /var by file count:

for dir in /var/*/; do 
  echo -n "$dir: "; 
  find "$dir" -type f | wc -l; 
done | sort -t: -k2 -rn | head -10

Real-Time Monitoring

Watch overall usage and the sizes of /var subdirectories update in real time:

watch -n 1 'df -h; echo "---"; du -sh /var/*'

Or install iotop for disk activity monitoring:

apt install iotop
iotop -o  # Show only active processes

How to Know It's Time to Clean: 7 Signs

  1. System slowdown: The system became sluggish for no apparent reason.
  2. Errors in logs: Messages like "No space left" or "Disk quota exceeded".
  3. Database problems: MySQL/PostgreSQL refuses to accept new records.
  4. Can't install updates: Package manager complains about lack of space.
  5. File creation errors: Applications crash when trying to save data.
  6. Usage above 85%: Even if everything works, this is a signal to act.
  7. Sharp usage growth: If usage grew 20%+ in a week — there's a leak somewhere.

Prevention: Set It and Forget It

1. Log Rotation

Make sure logrotate is configured properly.

Check configuration:

cat /etc/logrotate.conf
ls /etc/logrotate.d/

Test rotation in dry-run mode:

logrotate -d /etc/logrotate.conf
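
To actually run rotation once for a specific config and confirm it works end to end (using the application config from Scenario 1 as an example), force it:

logrotate -f /etc/logrotate.d/application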

2. Automatic /tmp Cleanup

Add daily cleanup to cron.

Clean files older than 7 days:

0 3 * * * find /tmp -type f -atime +7 -delete

3. Docker Monitoring

If using Docker, set up automatic cleanup.

Add weekly cleanup to cron:

0 2 * * 0 docker system prune -f

4. User Quotas

On shared servers, set quotas so a single user can't fill the entire disk.

Install quotas (for ext4):

apt install quota

Enable quotas in /etc/fstab.

Configure limits with edquota.
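
A rough sketch of those last two steps, assuming an ext4 partition /dev/sdb1 mounted at /home and a user named username (all of these are examples to adapt):

# /etc/fstab: add usrquota (and optionally grpquota) to the mount options
/dev/sdb1  /home  ext4  defaults,usrquota,grpquota  0  2

# Remount, create the quota files, and turn quotas on
mount -o remount /home
quotacheck -cugm /home
quotaon -v /home

# Set per-user limits interactively
edquota -u username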

Conclusion: Keep Your Disk Under Control

Disk space problems are one of the most common causes of failures in Linux systems. But with the right approach, they're easy to prevent:

  1. Regularly monitor disk usage with df -h.
  2. Use du to quickly track down problematic directories.
  3. Automate monitoring with scripts and notifications.
  4. Configure rotation of logs and automatic cleanup of temporary files.
  5. Act proactively: don't wait until disk fills to 95%.

Remember: it's much easier to spend 10 minutes setting up monitoring than an hour on emergency disk cleanup at 3 AM.

Want hosting where you don't have to worry about disk space?

THE.Hosting offers VPS and dedicated servers with NVMe RAID 10 and the ability to scale quickly. Out of space? Increase your disk in a couple of clicks, without downtime.