Running out of disk space can cause serious issues: websites going down, databases failing to write, services crashing, and system instability. This comprehensive guide helps you diagnose disk space problems, identify what's consuming storage, and implement both immediate fixes and long-term prevention strategies.
When disk usage reaches 100%, your server can become unstable or unresponsive. Do not delete files at random - follow this guide to safely free up space without breaking your system.
Recognizing the warning signs early can prevent serious problems. Typical error messages include:
No space left on device
write failed, filesystem is full
cannot create temp file for here-document: No space left on device
MySQL: Error writing file (Errcode: 28 - No space left on device)
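A quick way to confirm whether these errors are already occurring is to search recent kernel and system messages (the /var/log/syslog path assumes Debian/Ubuntu; on CentOS/RHEL check /var/log/messages instead):
# Check recent kernel messages for space errors
sudo dmesg | grep -i "no space left" | tail -5
# Check the system log
sudo grep -i "no space left" /var/log/syslog | tail -5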
First, determine how much space you're actually using:
# Check overall disk usage
df -h
# Check specific partition
df -h /
# Human-readable with filesystem type
df -hT
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 50G 45G 5G 90% /
/dev/vda2 100G 80G 20G 80% /var
Sometimes you run out of inodes (file count limit) even with space available:
# Check inode usage
df -i
# If IUse% is near 100%, you have too many files
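If IUse% is high, the next step is finding which directory holds the most files (session stores, mail queues, and cache directories are common offenders). A minimal sketch, assuming the culprit lives somewhere under /var:
# Count files per directory to find inode hogs
sudo find /var -xdev -type f -printf '%h\n' | sort | uniq -c | sort -nr | head -10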
Identify what's consuming your disk space:
# Check root directory usage
sudo du -h --max-depth=1 / | sort -hr | head -20
# Check specific directory
sudo du -h --max-depth=1 /var | sort -hr
# More detailed with percentages
sudo du -hx --max-depth=2 / | sort -hr | head -30
# Top 20 largest files on the system
sudo find / -type f -printf '%s %p\n' 2>/dev/null | sort -nr | head -20 | awk '{print $1/1024/1024 "MB", $2}'
# Largest files in /var
sudo find /var -type f -size +100M -exec ls -lh {} \; 2>/dev/null | awk '{print $5, $9}'
# Files larger than 1GB
sudo find / -type f -size +1G 2>/dev/null
Use ncdu for an easier, interactive experience:
# Install ncdu
sudo apt install ncdu -y # Debian/Ubuntu
sudo yum install ncdu -y # CentOS/RHEL
# Analyze root directory
sudo ncdu /
# Analyze specific directory
sudo ncdu /var/log
# Navigation:
# - Arrow keys: navigate
# - Enter: open directory
# - d: delete file/directory
# - q: quit
ncdu is the easiest way to find space hogs. Install it first, then run sudo ncdu / to visually explore your filesystem.
These are the usual suspects that consume excessive disk space:
Location: /var/log/
# Check log directory size
sudo du -sh /var/log/*
# Common space hogs:
# - /var/log/nginx/access.log
# - /var/log/apache2/access.log
# - /var/log/syslog
# - /var/log/mysql/mysql.log
# - /var/log/journal/
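The systemd journal in particular can quietly grow to several gigabytes; you can check its footprint directly:
# Show how much space the systemd journal is using
journalctl --disk-usage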
Location: /var/cache/apt/ or /var/cache/yum/
# Check cache size
sudo du -sh /var/cache/apt/archives/ # Debian/Ubuntu
sudo du -sh /var/cache/yum/ # CentOS/RHEL
Location: /tmp/ and /var/tmp/
# Check temp directories
sudo du -sh /tmp /var/tmp
Location: /var/lib/mysql/ or /var/lib/postgresql/
# Check database size
sudo du -sh /var/lib/mysql/
sudo du -sh /var/lib/postgresql/
# Check MySQL database sizes
mysql -u root -p -e "SELECT table_schema AS 'Database',
ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS 'Size (MB)'
FROM information_schema.tables GROUP BY table_schema;"
Location: /var/backups/ or custom backup directory
# Find backup files
sudo find / \( -name "*.tar.gz" -o -name "*.zip" -o -name "*.bak" \) -exec ls -lh {} + 2>/dev/null
# Check backup directory
sudo du -sh /var/backups/
Location: /boot/
# Check boot partition
df -h /boot
# List installed kernels
dpkg --list | grep linux-image
# Current kernel
uname -r
Files that have been deleted but are still held open by a process keep consuming space until that process releases them:
# Find deleted files still using space
sudo lsof +L1
# Or sum up the space they are holding (lsof +L1 already filters to link count 0)
sudo lsof +L1 | awk 'NR > 1 {sum += $7} END {print "Total:", sum/1024/1024, "MB"}'
# Debian/Ubuntu
sudo apt clean
sudo apt autoclean
sudo apt autoremove
# CentOS/RHEL
sudo yum clean all
# Shows space that will be freed
apt-get clean -s
✓ Safe - Can reclaim 200MB-2GB instantly
# Clear /tmp (careful with running applications)
sudo rm -rf /tmp/*
# Clear old temp files (older than 7 days)
sudo find /tmp -type f -atime +7 -delete
sudo find /var/tmp -type f -atime +7 -delete
✓ Usually safe - Can reclaim 100MB-1GB
# Truncate (empty) a log file without deleting it
sudo truncate -s 0 /var/log/nginx/access.log
sudo truncate -s 0 /var/log/syslog
# Compress old logs
sudo gzip /var/log/nginx/access.log.1
sudo gzip /var/log/apache2/access.log.1
# Delete old compressed logs
sudo find /var/log -name "*.gz" -mtime +30 -delete
# Clear journal logs older than 3 days
sudo journalctl --vacuum-time=3d
# Limit journal size to 100MB
sudo journalctl --vacuum-size=100M
✓ Safe if logs already archived - Can reclaim 1GB-10GB
# List all kernels
dpkg --list | grep linux-image
# Remove old kernels (keep current + 1 previous)
sudo apt autoremove --purge
# Manual removal (NEVER delete the current kernel!)
# First check current kernel:
uname -r
# Then remove old ones (example):
sudo apt remove linux-image-5.4.0-42-generic
⚠ Moderate risk - Can reclaim 200MB-1GB per kernel
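To see which kernel packages are removal candidates (everything installed except the running kernel), a quick check on Debian/Ubuntu:
# List installed kernels other than the one currently running
dpkg --list | grep '^ii' | grep linux-image | grep -v "$(uname -r)"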
# Find processes holding deleted files
sudo lsof +L1
# Restart the service holding the file (example with nginx)
sudo systemctl restart nginx
# Or restart the specific process
sudo kill -HUP [PID]
✓ Safe - Releases space immediately
# Remove unused Docker data
docker system prune -a
# Remove all stopped containers
docker container prune
# Remove unused images
docker image prune -a
# Remove unused volumes
docker volume prune
⚠ Moderate risk - Can reclaim GBs of space
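Before pruning, it is worth seeing where Docker's space is actually going:
# Show disk usage broken down by images, containers, and volumes
docker system df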
To run the safe cleanup steps above in one pass:
sudo apt clean && \
sudo journalctl --vacuum-time=3d && \
sudo find /tmp -type f -atime +7 -delete && \
sudo find /var/log -name "*.gz" -mtime +30 -delete
Log files are the #1 cause of disk space issues. Proper log management is essential:
Logrotate automatically rotates, compresses, and deletes old logs:
# Check if logrotate is installed
which logrotate
# View nginx log rotation config
cat /etc/logrotate.d/nginx
# Example logrotate configuration
sudo nano /etc/logrotate.d/custom-app
/var/log/myapp/*.log {
    daily            # Rotate daily
    rotate 7         # Keep 7 days of logs
    compress         # Compress old logs
    delaycompress    # Compress on 2nd rotation
    missingok        # Don't error if log missing
    notifempty       # Don't rotate empty logs
    create 0640 www-data www-data
    sharedscripts
    postrotate
        systemctl reload nginx
    endscript
}
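Logrotate has a debug mode, so you can verify a new rule before relying on it (the /etc/logrotate.d/custom-app path follows the example above):
# Dry run - show what would happen without actually rotating
sudo logrotate -d /etc/logrotate.d/custom-app
# Force an immediate rotation to test the full cycle
sudo logrotate -f /etc/logrotate.d/custom-app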
For emergencies when you need space immediately:
# Find and delete large log files
sudo find /var/log -type f -size +100M -delete
# Keep only last 3 days of logs
sudo find /var/log -type f -name "*.log" -mtime +3 -delete
# Truncate all .log files (use with caution!)
sudo find /var/log -type f -name "*.log" -exec truncate -s 0 {} \;
Reduce log verbosity to prevent future issues:
# Nginx - buffer access log writes (cuts disk I/O rather than log volume)
# Edit /etc/nginx/nginx.conf
access_log /var/log/nginx/access.log combined;
# Change to:
access_log /var/log/nginx/access.log combined buffer=32k;
# Or disable access logging entirely (not recommended for production)
access_log off;
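After editing the nginx configuration, validate it and reload without dropping connections:
# Test the configuration, then apply it
sudo nginx -t && sudo systemctl reload nginx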
# MySQL slow query log
# Edit /etc/mysql/mysql.conf.d/mysqld.cnf
slow_query_log = 0 # Disable
# Or
long_query_time = 10 # Only log queries slower than 10s
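To confirm the current slow-log settings and apply the change (the service name may be mysql or mariadb depending on your distribution):
# Check current slow query log settings
mysql -u root -p -e "SHOW VARIABLES LIKE 'slow_query_log%';"
# Restart MySQL so the config edit takes effect
sudo systemctl restart mysql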
Implement these strategies to avoid running out of space in the future:
# Install disk monitoring
sudo apt install monitoring-plugins-basic -y
# Create alert script
sudo nano /usr/local/bin/disk-alert.sh
Alert script example:
#!/bin/bash
THRESHOLD=80
# Second line of df output is the root filesystem; strip the % sign
CURRENT=$(df / | awk 'NR==2 {print $5}' | sed 's/%//g')

if [ "$CURRENT" -gt "$THRESHOLD" ]; then
    echo "Disk usage is ${CURRENT}%" | mail -s "Disk Alert" [email protected]
fi
# Make executable
sudo chmod +x /usr/local/bin/disk-alert.sh
# Add to crontab (check hourly)
(crontab -l 2>/dev/null; echo "0 * * * * /usr/local/bin/disk-alert.sh") | crontab -
# Create cleanup cron job
sudo crontab -e
# Add these lines:
# Clean temp files weekly
0 2 * * 0 find /tmp -type f -atime +7 -delete
# Clean old logs monthly
0 3 1 * * find /var/log -type f -name "*.gz" -mtime +30 -delete
# Clean package cache monthly
0 4 1 * * apt clean && apt autoremove -y
Offload large files (backups, media, archives) to external or object storage, and set per-user disk quotas so no single account can fill the disk:
# Enable quotas for users
sudo apt install quota -y
# Edit /etc/fstab and add usrquota,grpquota
sudo nano /etc/fstab
# Enable quotas
sudo quotacheck -cugm /
sudo quotaon -v /
# Set user quota (example: 10GB limit)
sudo setquota -u username 10000000 11000000 0 0 /
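To verify that the limits took effect:
# Report quota usage for all users on / in human-readable units
sudo repquota -s /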
Sometimes cleanup isn't enough - you genuinely need more space:
Add a block storage volume
Best for: Specific data growth (databases, uploads, backups)
Typical cost: $5-15/month per 100GB
Upgrade your server plan
Best for: Overall resource constraints (CPU + RAM + Disk)
Typical cost: +$10-30/month for the next tier
Use object storage
Best for: Backups, user uploads, media files, archives
Typical cost: $1-5/month for 500GB
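As a sketch of the block storage option (the device name /dev/sdb and mount point /mnt/data are assumptions - check your actual device with lsblk):
# Format and mount a newly attached volume
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/data
sudo mount /dev/sdb /mnt/data
# Persist the mount across reboots
echo '/dev/sdb /mnt/data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab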
Our team can help you:
Diagnose and fix high CPU and memory usage
Comprehensive backup management and disaster recovery
Fix SSH and RDP connection problems
Complete Nginx setup with SSL certificates
Our support team can help analyze your disk usage and recommend solutions.