How do I check the performance of a hard drive (either via terminal or GUI)? The write speed, the read speed, cache size and speed, random access speed.
Terminal method
hdparm is a good place to start.

sudo hdparm -v /dev/sda will give information as well.

dd will give you information on write speed.

If the drive doesn't have a file system (and only then), use of=/dev/sda. Otherwise, mount it on /tmp and write, then delete the test output file.
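For example, hedged sketches of both tests (device and file paths are placeholders; make sure of= points at a scratch file, not a device you care about):

    sudo hdparm -Tt /dev/sda                                            # cached and buffered read timings
    dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output   # rough write test through a mounted filesystem (cache-inflated unless you sync; see below)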
Graphical method
gnome-disks
See also: How to benchmark disk I/O (article).
Is there something more you want?
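If gnome-disks is not already installed, something along these lines should work (the package name below is the usual Ubuntu one; this is an assumption, check your distribution):

    sudo apt-get install gnome-disk-utility
    gnome-disks

Then select the drive and start the benchmark from the drive's menu.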
Suominen is right, we should use some kind of sync; but there is a simpler method, conv=fdatasync will do the job:
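For example (a sketch; the block size, count and path are only illustrative):

    dd if=/dev/zero of=/tmp/output bs=384k count=1k conv=fdatasync; rm -f /tmp/output

conv=fdatasync makes dd physically flush the data before reporting the rate, so the page cache does not inflate the result.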
If you want accuracy, you should use fio. It requires reading the manual (man fio) but it will give you accurate results. Note that for any accuracy, you need to specify exactly what you want to measure. Some examples:

Sequential READ speed with big blocks (this should be near the number you see in the specifications for your drive):
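A possible invocation (a sketch: the file name, sizes and runtime below are illustrative choices, not fio defaults):

    fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting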
Sequential WRITE speed with big blocks (this should be near the number you see in the specifications for your drive):
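For example (same caveats as above, with an occasional fsync thrown in):

    fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting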
Random 4K read QD1 (this is the number that really matters for real world performance unless you know better for sure):
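For example (illustrative values; the parts that matter for this measurement are the 4k block size and iodepth=1):

    fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting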
Mixed random 4K read and write QD1 with sync (this is worst case number you should ever expect from your drive, usually less than 1% of the numbers listed in the spec sheet):
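For example (a sketch; --fsync=1 forces a sync after every write, which is what makes this the worst case):

    fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randrw --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting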
Increase the --size argument to increase the file size. Using bigger files may reduce the numbers you get depending on drive technology and firmware. Small files will give "too good" results for rotational media because the read head does not need to move that much. If your device is near empty, using a file big enough to almost fill the drive will get you the worst case behavior for each test. In the case of an SSD, the file size does not matter that much.

However, note that for some storage media the size of the file is not as important as the total bytes written during a short time period. For example, some SSDs have significantly faster performance with pre-erased blocks, or they may have a small SLC flash area that's used as a write cache, and the performance changes once the SLC cache is full (e.g. the Samsung EVO series, which have 20-50 GB of SLC cache). As another example, Seagate SMR HDDs have about a 20 GB PMR cache area that has pretty high performance, but once it gets full, writing directly to the SMR area may cut the performance to 10% of the original. And the only way to see this performance degradation is to first write 20+ GB as fast as possible and continue with the real test immediately afterwards. Of course, this all depends on your workload: if your write access is bursty with longish delays that allow the device to clean its internal cache, shorter test sequences will reflect your real world performance better.

If you need to do lots of IO, you need to increase both the --io_size and --runtime parameters; see the sketch below. Note that some media (e.g. most cheap flash devices) will suffer from such testing because the flash chips are poor enough to wear down very quickly. In my opinion, if any device is poor enough not to handle this kind of testing, it should not be used to hold any valuable data in any case. That said, do not repeat big write tests thousands of times, because all flash cells will suffer some level of wear from writing.

In addition, some high quality SSD devices may have even more intelligent wear leveling algorithms where the internal SLC cache has enough smarts to replace data in place if it's being re-written while the data is still in the SLC cache. For such devices, if the test file is smaller than the total SLC cache of the device, the full test always writes to the SLC cache only and you get higher performance numbers than the device can support for larger writes. So for such devices, the file size starts to matter again. If you know your actual workload, it's best to test with the file sizes that you'll actually see in real life. If you don't know the expected workload, using a test file size that fills about 50% of the storage device should result in a good average result for all storage implementations. Of course, for a 50 TB RAID setup, doing a write test with a 25 TB test file will take quite some time!
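As a rough sketch of what a bigger, longer run could look like (every number here is an arbitrary illustration, not a recommendation):

    fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=20g --io_size=100g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=600 --group_reporting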
Note that fio will create the required temporary file on first run. It will be filled with pseudorandom data to avoid getting too good numbers from devices that try to cheat in benchmarks by compressing the data before writing it to permanent storage. The temporary file will be called fio-tempfile.dat in the above examples and stored in the current working directory, so you should first change to a directory that is mounted on the device you want to test. fio also supports using direct media as the test target, but I definitely suggest reading the manual page before trying that, because a typo can overwrite your whole operating system when one uses direct storage media access (e.g. accidentally writing to the OS device instead of the test device).

If you have a good SSD and want to see even higher numbers, increase --numjobs above. That defines the concurrency for the reads and writes. The above examples all have numjobs set to 1, so the test is about a single threaded process reading and writing (possibly with the queue depth, or QD, set with iodepth). High end SSDs (e.g. Intel Optane 905p) should get high numbers even without increasing numjobs a lot (e.g. 4 should be enough to get the highest spec numbers), but some "Enterprise" SSDs require going to the range 32-128 to get the spec numbers, because the internal latency of those devices is higher but the overall throughput is insane. Note that increasing numjobs to high values usually increases the resulting benchmark performance numbers but rarely reflects the real world performance in any way; see the sketch below.
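For instance, a hedged variant of the random read test with more concurrency and a deeper queue (4 and 32 are just example values):

    fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --iodepth=32 --direct=1 --numjobs=4 --runtime=60 --group_reporting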
I would not recommend using /dev/urandom because it's software based and slow as a pig. Better to take a chunk of random data from a ramdisk. For hard disk testing, random data doesn't matter, because every byte is written as is (also on an SSD with dd). But if you test a deduplicated ZFS pool with pure zero or random data, there is a huge performance difference.

Another point of view must be the sync time inclusion; all modern filesystems use caching on file operations.
To really measure disk speed and not memory, we must sync the filesystem to get rid of the caching effect. That can be easily done by:
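For example (a sketch; bs and count are chosen so the total written is 104857600 bytes, matching the arithmetic below, and testfile is a placeholder name):

    time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync"
    rm testfile

The elapsed (real) time printed by the shell includes the final sync, so it reflects the disk rather than the page cache.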
With that method, the elapsed (real) time includes the sync, so the disk data rate is just 104857600 / 0.441 = 237772335 B/s --> 237 MB/s.

That is over 100 MB/s lower than with caching.
Happy benchmarking,
If you want to monitor the disk read and write speed in real-time you can use the iotop tool.
This is useful to get information about how a disk performs for a particular application or workload. The output will show you read/write speed per process, and total read/write speed for the server, similar to top.

Install iotop and run it; typical commands are shown below.
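A hedged example for Ubuntu/Debian (iotop needs root privileges to see per-process I/O):

    sudo apt-get install iotop
    sudo iotop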
This tool is helpful to understand how a disk performs for a specific workload versus more general and theoretical tests.
Write speed
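A minimal sketch of a write test with dd (the path and sizes are placeholders; the file is created in the current directory):

    dd if=/dev/zero of=./largefile bs=1M count=1024

This writes 1 GiB in 1 MiB blocks and prints the achieved rate when it finishes (note that without a sync this can be inflated by the page cache, as discussed above).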
Block size is actually quite large. You can try with smaller sizes like 64k or even 4k.
Read speed
Run the following command to clear the memory cache
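One way to do this (as root; drop_caches only frees clean pages, hence the sync first):

    sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"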
Now read the file which was created in the write test:
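Continuing the sketch above with the same placeholder file name:

    dd if=./largefile of=/dev/null bs=4k

Because the cache was just dropped, this measures the drive rather than RAM.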
bonnie++ is the ultimate benchmark utility I know of for Linux.

(I'm currently preparing a Linux live CD at work with bonnie++ on it to test our Windows-based machine with it!)

It takes care of caching, syncing, random data, random locations on disk, small-size updates, large updates, reads, writes, etc. Comparing a USB key, a hard disk (rotational), a solid-state drive and a RAM-based filesystem can be very informative for the newbie.
I have no idea if it is included in Ubuntu, but you can compile it from source easily.
http://www.coker.com.au/bonnie++/
Some hints on how to use bonnie++:
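A hedged sample invocation (the directory, size and username are placeholders; -s should generally be at least twice your RAM so the cache cannot hide the disk):

    bonnie++ -d /tmp -s 4G -n 0 -m TEST -f -b -u your_user

Here -d is the directory to test in, -s the file size, -n 0 skips the small-file creation tests, -m sets a label for the report, -f skips the per-character I/O tests, -b disables write buffering (fsync after every write), and -u runs as the given user.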
A bit more at: SIMPLE BONNIE++ EXAMPLE.