Resolving Disk Space Issues

Updated December 2025

Running out of disk space can cause serious issues: websites going down, databases failing to write, services crashing, and system instability. This comprehensive guide helps you diagnose disk space problems, identify what's consuming storage, and implement both immediate fixes and long-term prevention strategies.

Critical Warning:

When disk space reaches 100%, your server can become unstable or unresponsive. Do not delete files at random; follow this guide to safely free up space without breaking your system.

1. Symptoms of Disk Space Issues

Recognizing the warning signs early can prevent serious problems:

Server Symptoms

  • Websites return 500/503 errors
  • SSH connections become slow
  • Services fail to start/restart
  • System becomes unresponsive
  • Can't create new files

Application Symptoms

  • Database errors ("disk full")
  • Unable to upload files
  • Email delivery failures
  • Failed package installations
  • Backup failures

Common Error Messages

No space left on device
write failed, filesystem is full
cannot create temp file for here-document: No space left on device
MySQL: Error writing file (Errcode: 28 - No space left on device)
Quick Tip: If you're seeing these symptoms, act quickly. Most issues can be resolved by clearing logs and temporary files within minutes.

2. Check Current Disk Usage

First, determine how much space you're actually using:

Basic Disk Usage Check

# Check overall disk usage
df -h

# Check specific partition
df -h /

# Human-readable with filesystem type
df -hT

Understanding the Output

Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        50G   45G   5G   90% /
/dev/vda2       100G   80G  20G   80% /var
0-70%: Healthy - plenty of room
70-85%: Warning - start monitoring
85-95%: Critical - take action now
95-100%: Emergency - immediate cleanup required
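
To flag only the filesystems that have crossed a threshold, a small awk filter over the df output works. A minimal sketch, using 85% used as the cutoff:

# Print filesystems at or above 85% usage (skips the header row)
df -h | awk 'NR > 1 && $5+0 >= 85'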

Check Inode Usage

Sometimes you run out of inodes (file count limit) even with space available:

# Check inode usage
df -i

# If IUse% is near 100%, you have too many files
Inode Exhaustion: If inodes are at 100% but disk space is available, you have too many small files (often logs or temp files). Solution: remove the large collections of small files (old sessions, cache entries, spool items), not a few big ones.
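
To locate where those files accumulate, counting files per directory usually pinpoints the culprit. A minimal sketch (GNU find assumed; adjust the starting path to the affected filesystem):

# Count files per second-level directory to find inode hogs
sudo find /var -xdev -type f 2>/dev/null | awk -F/ '{print "/"$2"/"$3}' | sort | uniq -c | sort -nr | head -10
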
3. Finding Large Files and Directories

Identify what's consuming your disk space:

Find Top Space-Consuming Directories

# Check root directory usage (stay on one filesystem, skip unreadable paths)
sudo du -hx --max-depth=1 / 2>/dev/null | sort -hr | head -20

# Check specific directory
sudo du -h --max-depth=1 /var | sort -hr

# Two levels deep for more detail
sudo du -hx --max-depth=2 / | sort -hr | head -30

Find Largest Files

# Top 20 largest files on the system
sudo find / -type f -printf '%s %p\n' 2>/dev/null | sort -nr | head -20 | awk '{print $1/1024/1024 "MB", $2}'

# Largest files in /var
sudo find /var -type f -size +100M -exec ls -lh {} \; 2>/dev/null | awk '{print $5, $9}'

# Files larger than 1GB
sudo find / -type f -size +1G 2>/dev/null

Interactive Disk Usage Tool

Use ncdu for an easier, interactive experience:

# Install ncdu
sudo apt install ncdu -y    # Debian/Ubuntu
sudo yum install ncdu -y    # CentOS/RHEL

# Analyze root directory
sudo ncdu /

# Analyze specific directory
sudo ncdu /var/log

# Navigation:
# - Arrow keys: navigate
# - Enter: open directory
# - d: delete file/directory
# - q: quit
Pro Tip: ncdu is the easiest way to find space hogs. Install it first, then run sudo ncdu / to visually explore your filesystem.

4. Common Disk Space Culprits

These are the usual suspects that consume excessive disk space:

1. Log Files (Most Common)

Location: /var/log/

# Check log directory size
sudo du -sh /var/log/*

# Common space hogs:
# - /var/log/nginx/access.log
# - /var/log/apache2/access.log
# - /var/log/syslog
# - /var/log/mysql/mysql.log
# - /var/log/journal/

2. Package Manager Cache

Location: /var/cache/apt/ or /var/cache/yum/

# Check cache size
sudo du -sh /var/cache/apt/archives/  # Debian/Ubuntu
sudo du -sh /var/cache/yum/           # CentOS/RHEL

3. Temporary Files

Location: /tmp/ and /var/tmp/

# Check temp directories
sudo du -sh /tmp /var/tmp

4. Database Files

Location: /var/lib/mysql/ or /var/lib/postgresql/

# Check database size
sudo du -sh /var/lib/mysql/
sudo du -sh /var/lib/postgresql/

# Check MySQL database sizes
mysql -u root -p -e "SELECT table_schema AS 'Database',
ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS 'Size (MB)'
FROM information_schema.tables GROUP BY table_schema;"

5. Backup Files

Location: /var/backups/ or custom backup directory

# Find backup files (group the name tests so all three extensions match)
sudo find / -type f \( -name "*.tar.gz" -o -name "*.zip" -o -name "*.bak" \) -print0 2>/dev/null | xargs -0 -r ls -lh

# Check backup directory
sudo du -sh /var/backups/

6. Old Kernels

Location: /boot/

# Check boot partition
df -h /boot

# List installed kernels
dpkg --list | grep linux-image

# Current kernel
uname -r

7. Deleted But Open Files

Files deleted but still held open by processes

# Find deleted files still using space
sudo lsof +L1

# Or sum the space they still hold (column 7 is SIZE/OFF)
sudo lsof +L1 | awk 'NR > 1 {sum+=$7} END {print "Total:", sum/1024/1024, "MB"}'
Typical Breakdown: On most servers, 60-70% of space issues come from log files, 15-20% from package caches, and 10-15% from temporary files or old backups.

5. Quick Cleanup Methods

CAUTION: Always back up important data before deleting files. Never delete files you don't understand. When in doubt, move files to a backup location first.
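
One simple way to honor that rule is a quarantine step: move candidates to a holding directory, confirm nothing breaks, and only then delete. A sketch with a hypothetical file name:

# Move a suspect file aside instead of deleting it outright
# (hypothetical path; substitute the file you actually found)
# Note: moving within the same filesystem frees no space until you delete
sudo mkdir -p /root/space-quarantine
sudo mv /var/log/old-debug-dump.log /root/space-quarantine/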

1. Clear Package Manager Cache

# Debian/Ubuntu
sudo apt clean
sudo apt autoclean
sudo apt autoremove

# CentOS/RHEL
sudo yum clean all

# See how much space the cache is currently using
sudo du -sh /var/cache/apt/archives/

✓ Safe - Can reclaim 200MB-2GB instantly

2. Clear Temporary Files

# Clear /tmp (careful with running applications)
sudo rm -rf /tmp/*

# Clear old temp files (older than 7 days)
sudo find /tmp -type f -atime +7 -delete
sudo find /var/tmp -type f -atime +7 -delete

✓ Usually safe - Can reclaim 100MB-1GB

3. Rotate/Truncate Large Log Files

# Truncate (empty) a log file without deleting it
sudo truncate -s 0 /var/log/nginx/access.log
sudo truncate -s 0 /var/log/syslog

# Compress old logs
sudo gzip /var/log/nginx/access.log.1
sudo gzip /var/log/apache2/access.log.1

# Delete old compressed logs
sudo find /var/log -name "*.gz" -mtime +30 -delete

# Clear journal logs older than 3 days
sudo journalctl --vacuum-time=3d

# Limit journal size to 100MB
sudo journalctl --vacuum-size=100M

✓ Safe if logs already archived - Can reclaim 1GB-10GB

4. Remove Old Kernels

# List all kernels
dpkg --list | grep linux-image

# Remove old kernels (keep current + 1 previous)
sudo apt autoremove --purge

# Manual removal (NEVER delete the current kernel!)
# First check current kernel:
uname -r

# Then remove old ones (example):
sudo apt remove linux-image-5.4.0-42-generic

⚠ Moderate risk - Can reclaim 200MB-1GB per kernel

5. Handle Deleted-But-Open Files

# Find processes holding deleted files
sudo lsof +L1

# Restart the service holding the file (example with nginx)
sudo systemctl restart nginx

# Or restart the specific process
sudo kill -HUP [PID]

✓ Safe - Releases space immediately
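
If restarting the service isn't an option, a deleted-but-open file can usually be truncated in place through /proc. A sketch, where PID 1234 and file descriptor 4 are placeholders taken from the PID and FD columns of the lsof output:

# Truncate a deleted-but-open file via its /proc handle
# (1234 and 4 are placeholders; use the numeric part of the FD column)
sudo truncate -s 0 /proc/1234/fd/4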

6. Clean Docker (if installed)

# Remove unused Docker data
docker system prune -a

# Remove all stopped containers
docker container prune

# Remove unused images
docker image prune -a

# Remove unused volumes
docker volume prune

⚠ Moderate risk - Can reclaim GBs of space

Quick Recovery Script: Run these commands in sequence for immediate relief:
sudo apt clean && \
sudo journalctl --vacuum-time=3d && \
sudo find /tmp -type f -atime +7 -delete && \
sudo find /var/log -name "*.gz" -mtime +30 -delete

6. Managing Log Files

Log files are the #1 cause of disk space issues. Proper log management is essential:

Configure Logrotate

Logrotate automatically rotates, compresses, and deletes old logs:

# Check if logrotate is installed
which logrotate

# View nginx log rotation config
cat /etc/logrotate.d/nginx

# Example logrotate configuration
sudo nano /etc/logrotate.d/custom-app

Example Logrotate Configuration

/var/log/myapp/*.log {
    # Rotate daily and keep 7 days of logs
    daily
    rotate 7
    # Compress old logs, but only from the second rotation on
    compress
    delaycompress
    # Don't error if the log is missing; skip empty logs
    missingok
    notifempty
    create 0640 www-data www-data
    sharedscripts
    postrotate
        systemctl reload nginx
    endscript
}

Note: logrotate only treats # as a comment when it begins a line, so keep comments on their own lines rather than after directives.
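
Before relying on a new config, dry-run it: logrotate's -d flag parses the file and reports what it would do without changing anything, and -f forces an immediate rotation:

# Validate the config (debug mode makes no changes)
sudo logrotate -d /etc/logrotate.d/custom-app

# Force a real rotation to verify it end to end
sudo logrotate -f /etc/logrotate.d/custom-app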

Aggressive Log Cleanup

For emergencies when you need space immediately:

# Find and delete large log files
sudo find /var/log -type f -size +100M -delete

# Keep only last 3 days of logs
sudo find /var/log -type f -name "*.log" -mtime +3 -delete

# Truncate all .log files (use with caution!)
sudo find /var/log -type f -name "*.log" -exec truncate -s 0 {} \;

Disable Verbose Logging

Reduce log verbosity to prevent future issues:

# Nginx - buffer access log writes
# Edit /etc/nginx/nginx.conf
access_log /var/log/nginx/access.log combined;
# Change to (buffering cuts write I/O; use logrotate to cap the file size):
access_log /var/log/nginx/access.log combined buffer=32k;

# Or disable access logging entirely (not recommended for production)
access_log off;

# MySQL slow query log
# Edit /etc/mysql/mysql.conf.d/mysqld.cnf
slow_query_log = 0  # Disable
# Or
long_query_time = 10  # Only log queries slower than 10s
Best Practice: Keep 7-14 days of compressed logs. This balances disk space with the ability to troubleshoot issues. Logs older than 2 weeks are rarely useful.

7. Preventing Future Issues

Implement these strategies to avoid running out of space in the future:

1. Set Up Monitoring & Alerts

# Install disk monitoring
sudo apt install monitoring-plugins-basic -y

# Create alert script
sudo nano /usr/local/bin/disk-alert.sh

Alert script example:

#!/bin/bash
THRESHOLD=80
# Second line of `df /` output; the fifth column is Use%
CURRENT=$(df / | awk 'NR==2 {print $5}' | tr -d '%')

if [ "$CURRENT" -gt "$THRESHOLD" ]; then
    echo "Disk usage is ${CURRENT}%" | mail -s "Disk Alert" [email protected]
fi
# Make executable
sudo chmod +x /usr/local/bin/disk-alert.sh

# Add to crontab (check hourly)
(crontab -l 2>/dev/null; echo "0 * * * * /usr/local/bin/disk-alert.sh") | crontab -

2. Automated Cleanup Jobs

# Create cleanup cron job
sudo crontab -e

# Add these lines:
# Clean temp files weekly
0 2 * * 0 find /tmp -type f -atime +7 -delete

# Clean old logs monthly
0 3 1 * * find /var/log -type f -name "*.gz" -mtime +30 -delete

# Clean package cache monthly (apt-get is the stable CLI for scripts)
0 4 1 * * apt-get clean && apt-get autoremove -y

3. Move Data to External Storage

Offload large files to external storage (a sample transfer command follows the list):

  • Move backups to object storage (S3, Backblaze B2)
  • Use external database servers for large datasets
  • Mount additional volumes for user uploads
  • Store media files on CDN/external storage
  • Archive old logs to cold storage
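
As one way to implement the backup offload, here is a minimal rclone sketch. It assumes a remote named "remote" has already been set up with rclone config; the bucket name is hypothetical:

# Move backups older than 30 days to object storage, deleting local copies
# (assumes an rclone remote named "remote" configured via `rclone config`)
rclone move /var/backups remote:server-backups --min-age 30d --progress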

4. Implement Quotas

# Enable quotas for users
sudo apt install quota -y

# Edit /etc/fstab and add usrquota,grpquota to the root mount options
sudo nano /etc/fstab

# Remount so the new options take effect
sudo mount -o remount /

# Build the quota files and turn quotas on
sudo quotacheck -cugm /
sudo quotaon -v /

# Set user quota (example: 10GB limit)
sudo setquota -u username 10000000 11000000 0 0 /

5. Regular Maintenance Schedule

Recommended Maintenance Schedule:

Daily: Monitor disk usage alerts
Weekly: Review large file growth, clean temp files
Monthly: Clean package cache, archive old logs, review backup retention
Quarterly: Audit disk usage, optimize databases, consider upgrades
Pro Tip: Set up monitoring before you have problems. Getting alerts at 80% usage gives you time to clean up proactively instead of scrambling when systems are failing.

8. When to Upgrade Storage

Sometimes cleanup isn't enough - you genuinely need more space:

Signs You Need More Storage

Upgrade Indicators

  ✓ Constantly above 80% even after cleanup
  ✓ Rapid growth (10%+ per month)
  ✓ Running out of files to delete
  ✓ Legitimate data growth from business
  ✓ Need to keep more backups/logs

Not Ready for Upgrade

  ✗ Haven't tried cleanup yet
  ✗ Lots of old backups/logs
  ✗ Package cache not cleared
  ✗ No monitoring in place
  ✗ Temporary spike in usage

Upgrade Options

Option 1: Add Additional Volume

Best for: Specific data growth (databases, uploads, backups)

  • Add 50GB-500GB volumes as needed
  • Mount to specific directories (/var/lib/mysql, /var/backups)
  • Cost-effective and flexible
  • Can be done without downtime

Typical cost: $5-15/month per 100GB
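
As a rough outline of what attaching a volume involves (device names vary by provider; /dev/vdb below is an assumption, and formatting erases the device):

# Identify the newly attached device
lsblk

# Format it (assumes /dev/vdb; this destroys any data on it!)
sudo mkfs.ext4 /dev/vdb

# Mount it where the growth is, e.g. a dedicated backup area
sudo mkdir -p /mnt/backups
sudo mount /dev/vdb /mnt/backups

# Persist the mount across reboots
echo '/dev/vdb /mnt/backups ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab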

Option 2: Upgrade VPS Plan

Best for: Overall resource constraints (CPU + RAM + Disk)

  • Get more storage + better performance
  • Simpler management (one disk)
  • May require downtime for migration
  • Good if you need CPU/RAM too

Typical cost: +$10-30/month for next tier

Option 3: Object Storage for Backups/Media

Best for: Backups, user uploads, media files, archives

  • Very cost-effective ($0.005-0.02/GB/month)
  • Unlimited scalability
  • Perfect for infrequently accessed data
  • Requires application integration

Typical cost: $1-5/month for 500GB

Contact X-ZoneServers for Upgrades

Our team can help you:

  • Analyze your storage usage and recommend the best solution
  • Add block storage volumes (with setup assistance)
  • Upgrade to a higher-tier VPS plan
  • Configure object storage integration
  • Migrate data with zero downtime
Free Consultation: Contact our support team at [email protected] or via live chat for a free storage assessment and upgrade recommendation.