df is reporting that your filesystem is almost full, but when you start using du to locate the offending directory, nothing lines up. What do you do? Here is an example scenario:
[root@web01 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       40G   36G  1.5G  97% /

[root@web01 ~]# du -sh /* |grep G
1.8G    /home
1.8G    /root
1.4G    /usr
12G     /var
du only finds about 17G (1.8G + 1.8G + 1.4G + 12G), while df reports 36G used, so there is almost 19G of space that is unaccounted for! This can happen when a large file is deleted but is still held open by a running process: the directory entry is gone, so du no longer counts it, yet the filesystem cannot reclaim the blocks until the process closes the file. You can see a listing of open files on your server with lsof. To find files that are held open by a process but are marked as deleted, run:
[root@web01 ~]# lsof |grep del
vsftpd     1479   root  txt   REG  202,1       167536   90144 /usr/sbin/vsftpd (deleted)
mysqld     8432  mysql   4u   REG  202,1            0 1155075 /tmp/ib4ecsWH (deleted)
mysqld     8432  mysql   5u   REG  202,1            0 1155076 /tmp/ib8umUE8 (deleted)
mysqld     8432  mysql   6u   REG  202,1            0 1155077 /tmp/ib0CGmnz (deleted)
mysqld     8432  mysql   7u   REG  202,1            0 1155078 /tmp/ibGK9i6Z (deleted)
mysqld     8432  mysql  10w   REG  202,1  19470520347 1114270 /var/lib/mysql/mysql-general.log (deleted)
mysqld     8432  mysql  11u   REG  202,1            0 1155079 /tmp/ib4M9ZPq (deleted)
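Note that grepping for "del" will also match any path that happens to contain those letters. Many lsof builds support +L1, which selects only open files whose link count is less than one (i.e. files that have been unlinked), so the following may give a cleaner listing; treat it as a sketch and check your lsof man page, since the option and the extra NLINK column it prints can vary by version:

[root@web01 ~]# lsof +L1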
Notice the file /var/lib/mysql/mysql-general.log (deleted). Doing the math on its size, 19470520347 bytes works out to about 18G, which accounts almost perfectly for the 'missing' space that du is not seeing.
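If you want to double check that arithmetic from the shell, here is a quick sketch (assuming SIZE/OFF is the 7th column of the lsof output, as it is in the listing above):

[root@web01 ~]# awk 'BEGIN { printf "%.1f GiB\n", 19470520347 / 1024^3 }'
18.1 GiB
[root@web01 ~]# lsof | awk '/\(deleted\)/ { sum += $7 } END { printf "%.1f GiB held open by deleted files\n", sum / 1024^3 }'

The second one-liner totals the sizes of every deleted-but-open file; for the listing above it also comes out to roughly 18.1 GiB.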
So in this case the culprit is the MySQL general query log, which is typically used only for debugging, since it records every query the server receives. The problem with leaving this log enabled is that it can quickly fill up your disk. MySQL simply needs to be restarted so the process releases the file handle, which allows the filesystem to reclaim the space. And if the general log is no longer being used, it should probably be disabled in my.cnf.
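As a rough sketch, the relevant my.cnf settings look like the following; general_log and general_log_file are standard MySQL server options, but the exact config file location and layout vary by distribution, so adapt as needed:

[mysqld]
# Disable the general query log (it records every statement the server receives)
general_log      = 0
# Path used if the general log is ever re-enabled
general_log_file = /var/lib/mysql/mysql-general.log

Then restart MySQL and confirm the space comes back; on a CentOS/RHEL-style box like the one in this example, the commands would be roughly:

[root@web01 ~]# service mysqld restart
[root@web01 ~]# df -h /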