I need to find out what impact different levels of disk I/O have on my application. To do so, I am looking for a way to generate different amounts of disk load while running the application in a benchmark mode.
We currently get complaints from users about slow performance, but the information we get is very thin, so we cannot yet determine where exactly the problem lies. I/O is the primary suspect, but we'd like to know for sure.
For starters I would be fine with something that creates a configurable, continuous read or write load, e.g. from /dev/zero to a tmp file or something. I know I can use dd, but that pushes the disk to its limit immediately, whereas I'd like some kind of throttle so I can make different test runs with increasing amounts of background traffic.
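For reference, the kind of unthrottled write load I mean is something like this (path and size are just examples):

```
# Saturates the disk more or less immediately - no way to dial the rate up or down
dd if=/dev/zero of=/tmp/testfile bs=1024k count=1024
```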
To make things a little more challenging, this has to run on RedHat9 boxes, which means kernel 2.4. So ideally this would be some kind of script that makes use of the default tools present anyway.
You could make a big file (e.g. 1 GB) and then use rsync to copy it. rsync has a built-in option to limit the bandwidth, so you could copy the file at 10 KiB/sec, 50 KiB/sec, 2 MiB/sec, etc.
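For example (file names and rates are just placeholders; --bwlimit takes the limit in KBytes per second, and you need to remove the copy, or pass -I/--ignore-times, so repeated runs actually re-transfer the data):

```
# Create a 1 GB source file once
dd if=/dev/zero of=/tmp/loadfile bs=1024k count=1024

# Copy it with an increasing bandwidth cap for each test run
rm -f /tmp/loadfile.copy
rsync --bwlimit=50 /tmp/loadfile /tmp/loadfile.copy     # ~50 KB/s

rm -f /tmp/loadfile.copy
rsync --bwlimit=2048 /tmp/loadfile /tmp/loadfile.copy   # ~2 MB/s
```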
As far as I remember, Bonnie can create some load: http://www.coker.com.au/bonnie++/ If it does not work the way you want it to, compose a small bash script that will, using various utilities such as dd, rm and ls (or the mentioned rsync).
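If you go the script route, here is a rough sketch of the idea (file name, default rate and the one-second interval are made up for illustration): write a fixed number of KiB per second and sync so the data actually hits the disk rather than just the page cache.

```
#!/bin/sh
# Crude write-load generator: appends RATE_KB KiB to a temp file once per second,
# so RATE_KB is roughly the sustained write rate in KiB/s.
RATE_KB=${1:-256}           # KiB written per second (default is arbitrary)
FILE=${2:-/tmp/io-load.$$}

trap 'rm -f "$FILE"; exit 0' INT TERM

i=0
while true; do
    # Append one chunk, then sync so it really reaches the disk
    dd if=/dev/zero bs=1k count=$RATE_KB 2>/dev/null >> "$FILE"
    sync
    sleep 1
    i=`expr $i + 1`
    # Start the file over every ~10 minutes so it does not fill the disk
    if [ $i -ge 600 ]; then
        rm -f "$FILE"
        i=0
    fi
done
```

Run it with different rates (or several copies at once) to step the background traffic up between test runs.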
If you have a benchmark mode, can you spin up multiple instances of it? If so, you can keep adding copies until you get something close to your production load.
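Something along these lines, assuming the benchmark can be started from the shell (./benchmark and the instance count are placeholders):

```
#!/bin/sh
# Start N parallel copies of the benchmark and wait for them all to finish
N=${1:-4}
i=1
while [ $i -le $N ]; do
    ./benchmark &
    i=`expr $i + 1`
done
wait
```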
You may use a scriptable tool to stress the disks with the proper pattern. For instance:
Both allow you to choose precisely how to balance reads and writes, I/O size, etc.
rugg is a disk testing utility that may do the trick for you. You can control the number of threads it uses to perform its tests, and thereby the load on the disk(s) and the system. It also lets you choose whether the tests run in parallel or sequentially, which adds another level of control.