I'm trying to evaluate some SSDs for a video-on-demand use case. We've done some benchmarking on them, but we'd like to get an idea of the number of video streams they can support with a load test that is more realistic than typical benchmark tools.
So far, I've done this:
- fill the SSD with movie files
- remotely mount that SSD on other servers
- run a script from these other servers that launches a bunch of VLC instances on loop, each on a randomly chosen file (a sketch of the script follows this list). NB: the VLC instances run with the options
--vout dummy --aout dummy --codec dummy
so that each instance reads its file continuously but doesn't do any decoding, to save CPU.
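For reference, here is a minimal sketch of that load script, assuming the SSD is exported over NFS and already mounted at /mnt/ssd on the load servers (the mount point, file glob, and default stream count are placeholders, not the real values):

    #!/bin/bash
    # Launch N looping VLC instances, each reading a random movie file from the
    # remotely mounted SSD, with dummy video/audio/codec modules so nothing is decoded.
    MOUNT=/mnt/ssd          # assumed mount point of the remote SSD
    STREAMS=${1:-100}       # number of parallel streams to launch

    files=("$MOUNT"/*.ts)   # assumed container format; adjust the glob to match the movie files

    for i in $(seq 1 "$STREAMS"); do
        f=${files[$RANDOM % ${#files[@]}]}
        cvlc --vout dummy --aout dummy --codec dummy --loop "$f" &>/dev/null &
    done
    wait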
I also have a set-top box that decodes from that same SSD. The idea is to see whether we can visually notice when the SSD's performance starts to fall apart.
I'm getting decent results, but the main problem is that the load servers hit a limit (in the 700-800 stream range with 10-12 GB of RAM) on the number of streams they can pull. It looks like it's due to a lot of swapping happening all at once, pushing iowait sky-high and making the server almost unresponsive.
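For context, I'm judging the swapping and iowait from standard monitoring along these lines (nothing exotic):

    # si/so columns show swap-in/swap-out, wa shows iowait, sampled every second
    vmstat 1
    # per-device utilisation, including the swap device, sampled every second
    iostat -x 1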
In a nutshell, my questions are:
- does that setup make sense?
- can you think of another way to do this?
- can you think of tweaks to keep the load servers from becoming unresponsive past a certain number of streams? (I played a bit with
/proc/sys/vm/swappiness
as shown in the snippet after this list, but it didn't seem to make a difference)
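For the record, this is the kind of swappiness tweak I tried (the value 10 is just an example; the default is usually 60):

    # lower the kernel's tendency to swap anonymous memory out
    echo 10 > /proc/sys/vm/swappiness
    # or equivalently via sysctl
    sysctl -w vm.swappiness=10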
Thanks,
Tim