
ORIGINAL QUESTION:

We have servers which we want to wipe and sell due to an environmentally friendly scheme, recycling, reducing carbon footprint etc. The servers have a raid configuration.

After doing some research, I am thinking of doing the following:

  1. Use a Linux Live CD to boot into server
  2. Generate a strong password with date +%s | sha256sum | base64 | head -c 32 ; echo and set up full-disk encryption
  3. Wipe the RAID configuration using dd if=/dev/zero of=/dev/cciss/c0d0 bs=1M, dd if=/dev/random of=/dev/cciss/c0d0 bs=1M, dd if=/dev/zero of=/dev/cciss/c0d0 bs=1M
  4. Wipe the HPA using hdparm -N /dev/cciss/c0d0, hdparm -N p[value goes here] /dev/cciss/c0d0
  5. Wipe the DCO using hdparm --dco-identify /dev/cciss/c0d0, hdparm --dco-restore /dev/cciss/c0d0, hdparm --yes-i-know-what-i-am-doing --dco-restore /dev/cciss/c0d0
  6. Bad sectors? sudo badblocks -n /dev/cciss/c0d0
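Step 2's password pipeline can be sanity-checked on any Linux box. A minimal sketch (note that seeding from date +%s derives the password from the clock, so anyone who can guess roughly when you ran the wipe could regenerate it; /dev/urandom would be a stronger source):

```shell
# Step 2's generator: hash the current epoch second, base64-encode the
# hash line, and keep the first 32 characters.
pass=$(date +%s | sha256sum | base64 | head -c 32)
echo "$pass"
echo "${#pass}"   # prints 32
```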

Too little? About right? Overkill?

The servers have sensitive data which must never be viewed by anyone.

Also, do I need to consider anything else, like wiping the RAM or any other possible area on the server where residue may be left?


UPDATE 1:

All disks are magnetic 2.5" SCSI disks.

oshirowanen

2 Answers


There is no need to do full disk encryption in step 2. Simply do

dd if=/dev/zero of=/dev/sda bs=1M

where /dev/sda is the hard disk device (change it to whatever your disk is called). You might also want to change the block size.
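As a safe way to see what this does, here is the same zero-overwrite run against a scratch file instead of a real device (disk.img is just a stand-in name; point dd at your actual disk only after triple-checking the device path):

```shell
# Create a 4 MiB stand-in "disk" full of random data.
dd if=/dev/urandom of=disk.img bs=1M count=4 status=none
# Overwrite it with zeros, as suggested above for the real device.
dd if=/dev/zero of=disk.img bs=1M count=4 conv=notrunc status=none
# Verify: deleting every NUL byte should leave nothing behind.
[ "$(tr -d '\0' < disk.img | wc -c)" -eq 0 ] && echo "zeroed"
```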

If you don't like overwriting with zeros (there are people who claim that such overwrites aren't enough to actually make the old values unrecoverable, though you'd need special hardware to recover the overwritten data), use /dev/urandom instead, for as many passes as you deem secure. You can also use the program shred.
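The shred variant can likewise be rehearsed on a scratch file first; this sketch does three random passes followed by a final zero pass (-z), so the end result looks like a plain zero-wipe:

```shell
# Stand-in file with 1 MiB of random data.
head -c 1048576 /dev/urandom > scratch.bin
# Three random overwrite passes, then a final pass of zeros.
shred -n 3 -z scratch.bin
# After -z the file should contain only NUL bytes.
[ "$(tr -d '\0' < scratch.bin | wc -c)" -eq 0 ] && echo "zero-filled"
```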

You could probably stop after step 2.

I'm not sure why you overwrite your RAID config block 3 times in step 3. If you overwrite the whole disk in step 2, it should already be gone. Also, AFAIK, it contains no sensitive information.

I'm also not sure about your step 4. Why do you think that this wipes the HPA? Shouldn't you instead do this before step 2: disable the HPA by making it zero blocks long, and then overwrite that area as part of step 2?

In step 6, you're only scanning for bad sectors; you're not doing anything else. But since you're already scanning for them, you might as well use badblocks -w and do a destructive scan, which will overwrite every sector and make step 2 unnecessary.
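A destructive badblocks run can also be rehearsed on a scratch file (badblocks accepts a regular file as its target, at least in common e2fsprogs builds). With -w it writes the patterns 0xaa, 0x55, 0xff and 0x00 to every block and reads each one back, so a clean run is also a four-pass wipe that ends in zeros:

```shell
# 2 MiB scratch file standing in for the disk.
dd if=/dev/urandom of=fakedisk.img bs=1M count=2 status=none
# Destructive read-write test: four patterns, the last one is 0x00.
badblocks -w fakedisk.img
# Any failing sectors would be listed on stdout; the file itself should
# now be zero-filled, because 0x00 is the final pattern written.
[ "$(tr -d '\0' < fakedisk.img | wc -c)" -eq 0 ] && echo "wiped"
```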

Hard drives set aside a few spare sectors that can be swapped in for defective ones. Those defective sectors are often still readable and may contain sensitive information, so you should take care of them too. They do not get overwritten in step 2 (nor, I think, in step 6).

Out of Band
  • About the 3 overwrites, something to do with some DOD standard? The server has a hardware raid card. Plus, thank you, I will make use of all these adjustments. – oshirowanen Feb 22 '18 at 17:00
  • How would I go about taking care of the problematic sectors described in your last paragraph? – oshirowanen Feb 22 '18 at 17:04
  • 3
    This is why it's much easier to encrypt everything in the first place, that way you don't have to worry about bad sectors or wear leveling. – AndrolGenhald Feb 22 '18 at 17:42
  • Yes, but it's a bit late for this insight now :-) – Out of Band Feb 22 '18 at 18:02
  • By default, shred uses the RC4 algorithm for generating random numbers. I'm not sure if it discards the first few kilobytes of the keystream, but if it does not, then it will be possible to know that the drive was wiped rather than any other alternative (e.g. encrypted but no password, etc). As such it might be better to use shred --random-source=/dev/urandom instead. – forest Feb 22 '18 at 23:01
  • @Pascal Yes, SSDs do have secure wipe capabilities. In fact, all modern ATA-compliant drives do, including HDDs. SSDs were just some of the first to implement SED, a much faster version of secure erase which simply discards a key used for transparent hardware encryption. Both modes will also attempt to wipe defective sectors if possible (which won't be done by writing to the block device). – forest Feb 22 '18 at 23:02
  • Well all wiping defective sectors with secure erase does is ignore the table of defective sectors and try to overwrite them anyway. I believe it would have a similar effect if you used hdparm to mark all bad sectors as good and then attempt overwriting them (even if it takes a few tries) using a low-level ATA command. Obviously sectors that are too badly damaged will never be writable, but many of them are just failing and still technically writable (if unreliably). – forest Feb 22 '18 at 23:33
  • FWIW, secure erase that takes 2 minutes (it's actually nearly instant, but 2 minutes is the minimum duration the drive can report) must be using SED, which takes care of defective sectors since it uses transparent encryption rather than actual overwriting. – forest Feb 22 '18 at 23:34
  • @Pascal, yes the disks are magnetic 2.5" scsi disks. – oshirowanen Feb 23 '18 at 10:32

If you don't absolutely need to sell the drives, pull them and send them to a recycler for shredding. Physically destroying the drives is a much more secure (and faster) way of dealing with them than all the iterations you're going through. There will still be data on each fragment of the drive, and it is technically readable, but the fragments are small and reassembling them into anything meaningful is a very daunting task. If you shred multiple disks in one batch, the signal-to-noise ratio is too low to give anyone a meaningful chance of getting your data off those platters.

If you want, you can still go through step 1 and encrypt the data before the drives are destroyed; that would make the task effectively impossible.

baldPrussian