I'm learning bash, and with all the confusion around the many, many different ways there are to zero-write a drive or transfer data to and from one (shred vs. dd vs. pv vs. cat vs. tee, and so on), I'm already overwhelmed.

For now, I've reluctantly opted for dd, as it seems the best command-line option out there for both tasks. Given that, I want to be sure I'm using it as efficiently as possible.

I understand that by default dd runs with a block size of 512 bytes, and that increasing this with something like:

dd if=/dev/zero of=/dev/sdX bs=3M status=progress

...will make it write larger blocks fewer times (fewer write system calls, so less per-call overhead), resulting in a faster run.

But if simply setting a larger block size will make the command run faster, what's to stop me from using bs=3G? What are the disadvantages to doing so, if any? What is the optimal block size Linux superusers would recommend using?
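
In case the honest answer is simply "measure it on your own hardware", here is the rough benchmarking loop I would use, based on my current understanding. This is an untested sketch; /dev/sdX is a placeholder and must point at a disposable test drive, since every run overwrites it:

    #!/bin/bash
    # Sketch: time dd with several block sizes, writing the same total
    # amount of zeroes (1 GiB) each run so the results are comparable.
    # WARNING: /dev/sdX is a placeholder -- this destroys its contents.
    target=/dev/sdX
    total=$((1024 * 1024 * 1024))        # 1 GiB per run

    for bs in 4K 64K 1M 4M 16M 64M; do
        bytes=$(numfmt --from=iec "$bs") # block size in bytes
        count=$((total / bytes))         # same total for every block size
        echo "=== bs=$bs count=$count ==="
        dd if=/dev/zero of="$target" bs="$bs" count="$count" conv=fdatasync 2>&1 |
            tail -n 1                    # dd reports its throughput on stderr
    done

My understanding is that conv=fdatasync makes dd flush to the device before reporting, so the timing isn't flattered by the page cache.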

Hashim Aziz
  • Just having a larger block size isn't the answer; the most efficient block size can vary by hardware, RAM, processor, other applications, etc. There is an article and script for testing/finding the right combination somewhere, I will see if I can find the link. – acejavelin Oct 19 '17 at 00:25
  • Ahh... here it is... Not really an answer, but it might put you on the right track... http://blog.tdg5.com/tuning-dd-block-size/ – acejavelin Oct 19 '17 at 00:27
  • @Vlastimil: It really makes no difference if you use dd, pv or cp: they all run as fast as the hardware allows. And please don't spread nonsense like "dd is obsolete for disk zeroing": dd is a particular tool with particular uses, and while it's old, "obsolete" has nothing to do with this. You use whatever tool works best in your scenario. – dirkt Oct 19 '17 at 06:37
  • "with all the confusion around the many, many different ways there are…" – You may find this question interesting: What's the difference between dd and cat for writing image files? – Kamil Maciorowski Oct 19 '17 at 15:11

1 Answer


The tool called hdparm might be something to look into. It is a very low-level tool with some potentially dangerous options; see its man page for details. It is powerful, so use it at your own risk. A zero-write will probably be fastest when performed by this tool.
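For instance, the ATA Secure Erase feature that hdparm exposes has the drive's own firmware do the wiping, which is usually faster than pushing zeroes over the bus with dd. A sketch of the sequence follows — /dev/sdX is a placeholder, the drive must show "not frozen" in the security section of hdparm -I output, and this destroys all data on it:

    # 1. Inspect the drive's security state; look for "not frozen".
    hdparm -I /dev/sdX

    # 2. Set a temporary user password (required before an erase).
    hdparm --user-master u --security-set-pass hunter2 /dev/sdX

    # 3. Issue the erase; the drive firmware wipes itself internally.
    hdparm --user-master u --security-erase hunter2 /dev/sdX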

Yokai
  • Do you have any sources or reasoning to corroborate the claim that hdparm would be a faster tool than all those mentioned in my question? – Hashim Aziz Jan 14 '18 at 22:32
  • Check the links I added for documentation on the tool. The only method I know of for testing the speed and time taken by a zero-write is to run different tools against the same test HDD and use the time command to compare their performance. The blue words in my answer are links to information on the tool. From what I can tell, it is written in assembler, which is going to give the fastest execution; the next step up would be C/C++. You just have to test different tools out. – Yokai Jan 15 '18 at 15:08
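
Along the lines of that last comment, a minimal timing comparison might look like this. This is a sketch only: /dev/sdX is a placeholder test drive, and both runs write the same 1 GiB so the numbers are comparable:

    # Time dd writing 1 GiB of zeroes, flushed to the device.
    time dd if=/dev/zero of=/dev/sdX bs=1M count=1024 conv=fdatasync

    # Time an equivalent write using head and shell redirection.
    time sh -c 'head -c 1G /dev/zero > /dev/sdX && sync'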