
If I am trying to destroy a single file, what is the best method?

shred -uvz -n500 file
dd if=/dev/null of=file
dd if=/dev/zero of=file

I hexdumped a file that was in the middle of shredding, and it was appropriately mangled with what shred said it was writing, but is it secure?

For the sake of this experiment, let's assume any cached versions of this file, and anything else that may point to it, are completely gone. The only remaining trace of the file is the command I ran, sitting in my shell history...

  • What are you trying to be "secure" against? You haven't given any threat model. I'm not sure what "anything else that may connect to this file" means either... if you're assuming all data is gone, isn't that your objective? – David Jul 09 '14 at 23:13
  • I am trying to be secure against any kind of file recovery, up to expert-level services. – user50625 Jul 09 '14 at 23:20

1 Answer


Depending on the filesystem, physical device, etc., those will have varying levels of effectiveness. I'll take a look at each one. Throughout, I'm assuming ext4 on a magnetic hard disk, as it's the most common default Linux filesystem right now. I'm also assuming recovery starts immediately after the command completes, i.e., there are no intervening writes to this filesystem.

shred -uvz -n500 file

500 passes is overkill. Generally, it's believed that a single pass is sufficient on modern drives (Source 1, Source 2, Source 3), so the 3 passes shred performs by default should be plenty. Of course, this assumes you're overwriting in place (the same disk blocks), which is true of default ext4 setups. If you have unusual mount options (like data=journal) you'll find this is no longer true, and no amount of overwriting the file will be enough unless you overwrite the entire partition.
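As a concrete sketch, a more proportionate invocation using standard GNU shred flags would be:

shred -uvz -n1 file

Here -n1 performs a single pass of random data, -z adds a final pass of zeros to make the shredding less conspicuous, -v reports progress, and -u deallocates and removes the file when done.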

dd if=/dev/null of=file

/dev/null gives EOF upon reading. Consequently, this just truncates the file, which is equivalent to echo -n '' > file. All of the data blocks will remain on disk and can be recovered. If the file is not fragmented (all data blocks are consecutive), this is trivial; if it is fragmented, it may take much more work to reconstruct the file.
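You can watch this happen yourself; here's a sketch assuming ext4 with e2fsprogs installed, where the partition name /dev/sda1 and PHYSICAL_BLOCK are placeholders you'd fill in:

filefrag -v file                 # note the physical block numbers of the file's extents
dd if=/dev/null of=file          # truncate: those blocks are freed, not erased
sudo dd if=/dev/sda1 bs=4096 skip=PHYSICAL_BLOCK count=1 | hexdump -C   # data still there

filefrag -v reports physical offsets in filesystem blocks (4096 bytes on a default ext4 setup), which is why bs=4096 lines up with its output.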

dd if=/dev/zero of=file

By default (unless conv=notrunc is passed), dd opens the output file with O_TRUNC, which causes the filesystem to immediately de-allocate the existing blocks of the file, then starts writing, triggering new allocations. Will these be the same blocks as were freed by the truncation? Maybe, maybe not. It depends on how much space is free, whether the file was fragmented, and your mount options both now and at the time the file was created. (Switching extents on/off would be a big difference.) Also, this command, as given, never stops on its own: you'll keep writing until the partition is full, which actually gives you a good chance of overwriting your data blocks, but will probably take much longer than is desirable.
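If the goal is to overwrite the file's current blocks in place, a sketch like the following (GNU coreutils assumed) avoids the truncation and stops at the file's length:

size=$(stat -c %s file)   # file length in bytes
dd if=/dev/zero of=file bs=4096 conv=notrunc,fdatasync count=$(( (size + 4095) / 4096 ))

conv=notrunc keeps the existing allocation, conv=fdatasync flushes the data to disk before dd exits, and the count rounds up so the final partial block (including its slack space) is fully overwritten. All the same filesystem caveats above still apply.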

Finally, you didn't specify what media you're doing this on. I assumed a magnetic disk above because the answer is much more straightforward there. On an SSD, it's a completely different world: wear leveling means you're never guaranteed to be overwriting the same physical blocks, and there are many blocks that exist but aren't visible to the operating system. Even the built-in security features of SSDs won't necessarily erase your data. In the case of an SSD, my best recommendation is to encrypt with a strong passphrase from the start, and when you're ready to recycle the drive, hit it with both an overwrite of zeros and an ATA secure erase. Between the two, there's a very good chance you'll have destroyed the master key, and even if you haven't, forensic recovery is highly doubtful.
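As a sketch of that approach (the device name /dev/sdX is a placeholder, and hdparm's ATA security commands should only be issued against a directly attached drive that is not in the "frozen" state):

cryptsetup luksFormat /dev/sdX                            # encrypt from day one, so plaintext never reaches the flash
dd if=/dev/zero of=/dev/sdX bs=1M                         # at retirement: overwrite everything the OS can see...
hdparm --user-master u --security-set-pass p /dev/sdX     # ...then set a temporary ATA security password
hdparm --user-master u --security-erase p /dev/sdX        # ...and trigger the drive's built-in secure erase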
