I sometimes use dcfldd, because it has more features and is easier to use than regular dd. It gives constant, fast-updating status output, and its pattern input is a lot faster than reading from /dev/zero or any other device.
As an example, if I wanted to wipe a drive with dcfldd, I would do something like this:
dcfldd pattern="00" of=/dev/hda bs=4096
This writes "00000000" to the drive byte by byte, and afterwards you can use the vf= option to verify the pattern.
But I noticed a bit of a problem and wanted to know if any of you can help. When I run this
dcfldd pattern="FF" of=/dev/hda bs=4096; sync
or
dcfldd pattern="11111111" of=/dev/hda bs=4096; sync
I can fill the drive with 1's and it does so very fast. For example, on a 74 GB drive that has been zeroed, I can write, say, 5 GB worth of 1's. If I then use a hex editor like xxd or hd in Linux, I can see all of the 1's.
Though if I then run this command,
dcfldd pattern="00" of=/dev/hda bs=4096; sync
for, let's say, only 1 GB worth of 0's, there should still be 4 GB worth of 1's left, since I wrote 5 GB of 1's and only overwrote 1 GB of them with 0's.
But when I look at the drive in a hex editor, it is all 0's, even though the software states it only wrote 1 GB of 0's. I have tried running the program for as short a time as possible, sending a SIGINT almost immediately after starting it.
Any idea on why this occurs and can you replicate it?
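One way to try replicating the symptom without risking a real disk is to scale it down onto a scratch file, using plain dd (the file name and sizes here are just illustrative stand-ins for the 5 GB/1 GB case):

```shell
# Write 5 MiB of 0xFF bytes to a scratch file (stand-in for the drive full of 1's).
head -c 5242880 /dev/zero | tr '\0' '\377' > scratch.img

# Overwrite only the first 1 MiB with zeros; conv=notrunc keeps the rest intact.
dd if=/dev/zero of=scratch.img bs=4096 count=256 conv=notrunc 2>/dev/null

# Inspect the bytes straddling the 1 MiB boundary: zeros below it, 0xFF above it.
dd if=scratch.img bs=1 skip=1048570 count=12 2>/dev/null | od -An -tx1
```

Note that conv=notrunc matters for a regular file: without it, dd truncates the file, which by itself would make everything past the overwrite disappear. A block device like /dev/hda cannot be truncated, so that particular pitfall does not apply there.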
What happens if you specify a count of blocks instead of manually interrupting it? For example, do
dcfldd pattern="FF" of=/dev/hda bs=4096 count=102400; sync
then do
dcfldd pattern="00" of=/dev/hda bs=4096 count=51200; sync
and compare the resulting times. Then look at the data on the disk to see if the boundary where zeros change to ones is where it ought to be (at about 200 Megabytes* for this example).
* That's real MB (1024*1024), not "maybe"-bytes.
To get any meaningful results, unless you wish to overwrite the whole partition or disk, you need to provide a count argument. In the examples you provided, you did not specify a count, so I am at a loss to understand how you ever expected to get 5 GB of ones or 1 GB of zeros.
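To sketch the arithmetic for the sizes in the question (keeping the hypothetical /dev/hda path, and assuming "5gb" means 5 GiB): with bs=4096, the counts work out like this:

```shell
bs=4096
# 5 GiB of 0xFF: 5 * 1024^3 / 4096 = 1310720 blocks
ones_count=$((5 * 1024 * 1024 * 1024 / bs))
# 1 GiB of 0x00 over the start: 1024^3 / 4096 = 262144 blocks
zeros_count=$((1024 * 1024 * 1024 / bs))
echo "ones: $ones_count blocks, zeros: $zeros_count blocks"

# With explicit counts the writes stop where you expect. (Do NOT run these
# against a disk you care about.)
# dcfldd pattern="FF" of=/dev/hda bs=$bs count=$ones_count; sync
# dcfldd pattern="00" of=/dev/hda bs=$bs count=$zeros_count; sync
```

After the second command, bytes 0 through 1 GiB−1 should read as 0x00 and the next 4 GiB as 0xFF, which is exactly the boundary check suggested above.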