In a very similar question, the accepted answer suggests to do:
dd if=/dev/zero of=/tmp/output bs=8k count=10k;
to measure the write speed of a hard disk.
My tests were inconclusive. I have an LVM-based software RAID 1 setup with two ordinary hard disks. When running the aforementioned command, it gives the following result:
10240+0 records in
10240+0 records out
83886080 bytes (84 MB) copied, 0.0784284 s, 1.1 GB/s
Not that I'm unhappy to have a RAID array which can store 1 GB per second, but I would still like to know:
Why am I getting wrong results? Is it something to do with cache? RAID? LVM?
How to get the actual file-level write performance?
Your test file size is very small, only 80 MB. A file that small makes the result less accurate, but more importantly it means the entire operation may be satisfied by the cache rather than testing true filesystem performance.
To fix this, I'd make the file size much larger, say a couple of GB.
You probably also need to run a
sync
afterwards and include it in the measurement, as in:
time sh -c 'dd if=/dev/zero of=/tmp/output bs=1M count=2k && sync'
or something similar. Also make sure /tmp is not actually a tmpfs, in which case the data is stored in memory, not on disk; the destination should live on a real disk partition.
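As a quick check (a minimal sketch; /tmp here stands in for whatever destination path you are testing), you can ask what filesystem backs the destination before trusting the numbers:
# Show the filesystem type backing /tmp; "tmpfs" means writes go to RAM, not disk
df -T /tmp
# Or query the mount table directly for the same information
findmnt --target /tmp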
Copying from /dev/zero can in some cases give artificially high results when the disk or its controller performs its own data compression, because a string of zeroes compresses down to virtually nothing. This is a particular issue on SSDs. Make sure your test data is random enough not to compress well, such as JPEG files or MP4 videos (and not the same file over and over again, because that would still compress well). Random data works best if it's pre-generated; don't generate it on the fly, because that will artificially slow down the transfer.
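For example (a sketch only; the paths ~/testdata and /mnt/target are assumptions, not from the original answer), you could pre-generate a random file once and then time copying it onto the disk under test, including the final sync:
# Pre-generate ~2 GB of incompressible random data (done once, outside the timed run)
dd if=/dev/urandom of=~/testdata bs=1M count=2k
# Time the copy to the target filesystem, flushing it to disk with sync
time sh -c 'cp ~/testdata /mnt/target/testdata && sync'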