Gnome Disks (gnome-disks, formerly known as palimpsest) provides SMART and some benchmarking information. From what I gather, it used to be based on a command-line tool, udisks, but these projects appear to have merged.
The new Gnome Disks utility appears to show only average results from the benchmarking tests. From screenshots, previous versions of palimpsest appear to have included maximum and minimum response times in the results as well.
I'm interested in all of the benchmarking results: specifically, I'm trying to find disks that are having a negative effect on users by weeding out disks with slow worst-case I/O. I also want to map this data over time, so I need to be able to process/export it programmatically.
I looked at udisksctl (in the udisks2 package), but it appears to provide only general information about the disks and some SMART information.
Is there a command-line tool that runs the old udisks-style benchmarking report and returns minimums and maximums as well?
I can't speak to the old udisks benchmarking report, but perhaps fio will be of use to you. fio is currently available for all versions of Ubuntu from Precise to Zesty. You can install it with sudo apt-get install fio after activating the Universe repository.

Some quick testing indicates that you can choose the partition to test simply by ensuring that the pwd (present working directory) is on the partition you wish to test.

For instance, here are the results I get running it on my root partition, which is on a Toshiba THNSNH128GBST SSD (my /dev/sda):
$ sudo fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=256M --numjobs=8 --runtime=60 --group_reporting
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
...

Running the same command in my home directory, which is on a Western Digital WD2003FZEX-00Z4SA0 HDD, results in the following output:
I trimmed out the output produced while it's running to keep this answer a readable size.
Explanation of output that I found interesting:
You can see that we get min, max, average, and standard deviation for all of these metrics.
slat indicates submission latency: the time it took to submit the IO to the kernel.
clat indicates completion latency. This is the time that passes between submission to the kernel and when the IO is complete, not including submission latency. In older versions of fio, this was the best metric for approximating application-level latency.
lat seems to be fairly new. This metric starts the moment the IO struct is created in fio and is completed right after clat, making it the one that best represents what applications will experience. This is the one that you'll probably want to graph.
bw Bandwidth is pretty self-explanatory except for the per= part. The docs say it's meant for testing a single device with multiple workloads, so you can see how much of the IO was consumed by each process.
When fio is run against multiple devices, as I did for this output, it can provide a useful comparison, even though its intended purpose is to test a specific workload.
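Since you mentioned graphing, note that fio can also record every individual IO's latency to a log file with --write_lat_log=prefix, which gives you the full distribution rather than just the summary. A minimal sketch of pulling the worst-case (min/max) latency out of such a log, assuming the common record layout of roughly "time_ms, latency, data_direction, block_size" (the latency unit depends on the fio version, so treat the sample values below as purely illustrative):

```python
import csv
import io

# Illustrative fio *_lat.log lines (not real measurements). Records are
# roughly: time_ms, latency, data_direction (0=read, 1=write), block_size.
sample_log = """\
10, 4500, 1, 4096
25, 3100, 1, 4096
40, 910000, 1, 4096
"""

def lat_min_max(log_text):
    """Return (min, max) latency seen in a fio latency log."""
    # Column 1 holds the per-IO latency; int() tolerates the leading space.
    lats = [int(row[1]) for row in csv.reader(io.StringIO(log_text))]
    return min(lats), max(lats)

print(lat_min_max(sample_log))  # (3100, 910000)
```

Feeding the full log's time/latency pairs to your plotting tool of choice would give you the over-time view you're after.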
I'm sure it comes as no surprise that the latency on the hard drive is much higher than that of the solid state drive.
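For the programmatic export part of your question: fio can emit its complete results, including min/max/mean/stddev for each latency metric, as JSON via --output-format=json. A sketch of extracting the completion-latency extremes per job, assuming the field layout of recent fio releases (which report latencies in nanoseconds under clat_ns; older versions used clat in microseconds) — the sample JSON fragment is illustrative, not real output:

```python
import json

# Illustrative fragment of fio's --output-format=json results (field names
# as in recent fio releases; values are made up for the example).
sample = """
{
  "jobs": [
    {
      "jobname": "randwrite",
      "write": {
        "clat_ns": {"min": 1200, "max": 8900000, "mean": 45000.5, "stddev": 12000.2}
      }
    }
  ]
}
"""

def clat_extremes(fio_json):
    """Map each job name to its (min, max) completion latency in ns."""
    data = json.loads(fio_json)
    return {
        job["jobname"]: (job["write"]["clat_ns"]["min"],
                         job["write"]["clat_ns"]["max"])
        for job in data["jobs"]
    }

print(clat_extremes(sample))  # {'randwrite': (1200, 8900000)}
```

Collecting these JSON files from each run would let you track worst-case latency per disk over time.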
Sources:
https://tobert.github.io/post/2014-04-17-fio-output-explained.html
https://github.com/axboe/fio/blob/master/README