I have a hunch that a certain intermittent bug might only manifest itself when the disk read rate is slow. Troubleshooting is difficult because I can't reliably reproduce it.
Short of simply gobbling I/O with a high-priority process, is there any way for me to simulate having a slow hard drive?
Use nbd, the Network Block Device, and then rate-limit access to it using, say, trickle. That'll slow you down :)
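A rough sketch of that setup, assuming the classic command-line invocation of nbd-server (newer versions may insist on a config file) and a loopback image file; the paths, port, and rate limits are just placeholders:

    # Build a 1 GiB disk image to export over NBD
    dd if=/dev/zero of=/tmp/slowdisk.img bs=1M count=1024
    mkfs.ext4 -F /tmp/slowdisk.img

    # Export it with nbd-server, rate-limited through trickle
    # (-s = standalone mode, -d/-u = download/upload limits in KB/s)
    trickle -s -d 256 -u 256 nbd-server 10809 /tmp/slowdisk.img

    # Attach and mount it on the client side
    modprobe nbd
    nbd-client localhost 10809 /dev/nbd0
    mkdir -p /mnt/slowdisk
    mount /dev/nbd0 /mnt/slowdisk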
It'll force you to read from the disk instead of taking advantage of the page cache.
If you really wanted to get sophisticated, you could do something like faking a read error every nth time using the SCSI fault injection framework.
http://scsifaultinjtst.sourceforge.net/
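That framework needs a patched kernel. If that is more than you want, a similar effect can be had from the stock device-mapper flakey target, which passes I/O through normally for a while and then fails it. This is a sketch of that alternative, not of the fault-injection framework itself, and the device and intervals are only examples:

    # Wrap /dev/sdb1 in a flakey device: normal for 59 s, then all I/O errors for 1 s, repeating
    SIZE=$(blockdev --getsz /dev/sdb1)
    dmsetup create flaky --table "0 $SIZE flakey /dev/sdb1 0 59 1"

    # Point the program under test at /dev/mapper/flaky, then tear it down
    dmsetup remove flaky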
Have a USB 1.1 hub? Or a slow SD card? They'll get you down to under 10 Mbit/s.
This is by no means a complete solution, but it may help in conjunction with other measures: There is an I/O scheduler much like a process scheduler, and it can be tweaked.
Most notably, you can actually choose amongst different schedulers:
deadline may help you get more strongly reproducible results.
noop, as its name implies, is insanely dumb, and will enable you to wreak absolute havoc on I/O performance with little effort.
anticipatory and cfq both try to be smart about it, though cfq is generally the smarter of the two. (As I recall, anticipatory is actually the legacy scheduler from right before the kernel started supporting multiple schedulers.)
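To see which scheduler a disk is currently using and to switch it, you can poke the sysfs knob (sda is an example device; on newer kernels the choices are the multi-queue schedulers such as mq-deadline and none instead):

    cat /sys/block/sda/queue/scheduler          # the active scheduler is shown in brackets
    echo noop > /sys/block/sda/queue/scheduler  # as root: switch to noop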
You can use a virtual machine and throttle its disk access. Here are some tips on how to do it in VirtualBox, from section 5.8 of the manual, "Limiting bandwidth for disk images": https://www.virtualbox.org/manual/ch05.html#storage-bandwidth-limit
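That section does the throttling with VBoxManage bandwidth groups. A minimal sketch, assuming a VM named "testvm" with a SATA controller named "SATA" and a disk image disk.vdi (those names, and the 10 MB/s limit, are placeholders):

    # Create a bandwidth group capped at 10 MB/s
    VBoxManage bandwidthctl "testvm" add SlowDisk --type disk --limit 10M

    # Attach the VM's disk to that group
    VBoxManage storageattach "testvm" --storagectl "SATA" --port 0 --device 0 \
        --type hdd --medium disk.vdi --bandwidthgroup SlowDisk

    # The limit can be changed while the VM is running
    VBoxManage bandwidthctl "testvm" set SlowDisk --limit 512K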
Apart from trying to slow down the hard drive itself, you could try using filesystem benchmarking tools such as bonnie++, which can cause a great deal of disk I/O.
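For example, a bonnie++ run along these lines (directory, size, and user are placeholders; the size should exceed your RAM so the page cache can't absorb it) will keep the disk busy while you try to reproduce the bug:

    # -d: directory to test in, -s: file size in MiB, -n 0: skip the small-file tests,
    # -u: user to run as when started as root
    bonnie++ -d /mnt/testdisk -s 16384 -n 0 -u nobody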
You could try copying a large file, such as an ISO of the Ubuntu install CD, and running two copies at the same time. That should slow your drive down quite a bit.
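Something like this (the ISO path is a placeholder) starts two copies in the background as a crude load generator:

    # Two concurrent copies of a large file to generate disk I/O
    cp ubuntu.iso /tmp/copy1.iso &
    cp ubuntu.iso /tmp/copy2.iso &
    wait    # block until both copies finish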
I have recently figured out a setup where I mount Google Drive with google-drive-ocamlfuse and run the code from that mount, so every read has to go over the network.
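A minimal sketch of that kind of mount (the mountpoint is arbitrary and the program name is hypothetical; the first run opens a browser for the OAuth authorization):

    # Mount Google Drive as a FUSE filesystem
    mkdir -p ~/gdrive
    google-drive-ocamlfuse ~/gdrive

    # Put the data the program reads on the mount and point it there
    cp -r testdata ~/gdrive/
    ./my-program ~/gdrive/testdata

    # Unmount when done
    fusermount -u ~/gdrive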
If 16 seconds latency is not slow enough, you can just unplug your router.
For reference, here is the original use case, where I got the idea for this: https://github.com/goavki/apertium-apy/pull/76#issuecomment-355007128
Why not run iotop and see if the process that you are trying to debug is causing lots of disk reads/writes?
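For instance (iotop needs root; -o hides idle processes and -a shows accumulated totals instead of current bandwidth):

    sudo iotop -o -a    # watch only the processes that are actually doing I/O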
How about make -j64? In the articles describing that new 200-line performance patch, make -j64 was the task used to eat a lot of the computer's resources.
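As background load while you try to reproduce the bug, that could look like this (any big source tree will do; the path is a placeholder):

    cd ~/src/linux                   # some large source tree
    make -j64 > /dev/null 2>&1 &     # heavy parallel build running in the background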