
I ran a process that was writing stderr log messages to a file and didn't realize how fast the file would grow. It filled the entire rest of my hard drive, and now when I try to remove it with rm, the computer freezes. I'm running OS X Mountain Lion; my primary hard drive is 2TB, and the log file is consuming over 1TB of it.

Do I need to just let rm run for a long time to remove a 1TB file? Should I just restore from a time machine backup?

ewwhite
RyanH

2 Answers


You want to zero the log file from the command line.

Something like:

    : > /path/to/file.log
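In concrete terms, a minimal sketch (demo.log is a stand-in for the real log path):

```shell
# Truncate a log file in place via shell redirection.
printf 'line1\nline2\n' > demo.log   # simulate some logged data
: > demo.log                         # ':' is a no-op command; '>' truncates the file to 0 bytes
wc -c < demo.log                     # prints 0
rm -f demo.log                       # clean up the demo file
```

The key point is that the redirection truncates the existing inode rather than unlinking it, so it returns almost immediately regardless of how large the file was.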

See: Is there a way to delete 100GB file on Linux without thrashing IO / load?

ewwhite
  • Any explanation for why this would be faster? – Jeff Ferland Sep 20 '12 at 17:38
  • When there's no (other) write access to the file, truncate vs. unlink shouldn't make much of a difference. – Ansgar Wiechers Sep 20 '12 at 20:36
  • I assumed the app was still writing. – ewwhite Sep 20 '12 at 20:41
  • This worked, although there was nothing still writing to the file (because the disk had completely filled up as a result of writing to it). It's not clear to me why this worked and rm did not. – RyanH Sep 20 '12 at 20:44
  • @RyanH Asking that question will only result in disparaging remarks about HFS and the Mac OS kernel's disk-scheduling algorithm (both of which SUCK and cause interactive performance to go to the sad place if you do something disk-intensive) – voretaq7 Sep 20 '12 at 20:47
  • shrug Still a mystery, then. – Jeff Ferland Sep 20 '12 at 20:47
  • I'm not sure if this is what was happening in this case, but in general a file's space isn't freed until its last link is removed (e.g. with rm) AND the last process that had it open closes it. If the process writing to it still had it open, rm won't have recovered the space -- but truncating it (e.g. with : >) will, even if it's still open in some other process. – Gordon Davisson Sep 21 '12 at 06:40
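Gordon Davisson's point can be sketched in a few lines (demo.log is a hypothetical stand-in; the reader process here is tail -f, but any process holding the file open behaves the same). Truncation shrinks the shared inode immediately, whereas rm would only remove the name and leave the blocks allocated until the holder exits:

```shell
# A background reader keeps the file open while we truncate it.
printf 'payload\n' > demo.log
tail -f demo.log > /dev/null 2>&1 &   # holds demo.log's inode open
TAILPID=$!
: > demo.log                          # truncate in place: size drops to 0 right away
wc -c < demo.log                      # prints 0, even though tail still has it open
kill "$TAILPID"
rm -f demo.log
```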

Do I need to just let rm run for a long time to remove a 1TB file?

Yes. A file that filled up the disk is very likely scattered across fragments all over the volume, and the filesystem has to update its free-space list for every single fragment as it releases them. That takes a while.
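If you do let rm run, you can watch the space come back as the filesystem works through the extents. A sketch (the "/" path is an assumption; substitute the volume that filled up):

```shell
# Run this periodically in a second terminal (or wrap it in a loop)
# while rm works; the available-blocks column should climb steadily
# as the file's extents are released back to the free list.
df -k / | tail -1
```

If the number isn't moving at all over several minutes, that's a hint the operation is stuck rather than merely slow.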

Should I just restore from a time machine backup?

This is an option, though it's debatable whether it would take any less time. If restoring might lose data created since your last backup, just wait for rm to finish.

Jeff Ferland
  • I tried letting rm run for more than a day and it didn't finish and the computer was completely unresponsive. The : > file approach finished within a few minutes. – RyanH Sep 20 '12 at 20:46