I don't understand the performance I'm seeing from Apache. I would expect that more concurrent requests would perform better than fewer, up to a point, but beyond 3 concurrent requests overall performance is flat. For example, I see the same requests/sec with 3 concurrent requests as with 4. With each additional concurrent request, the average response time increases just enough that the overall request handling rate stays the same.
To test this, I created a new Ubuntu 10.04 VM on Slicehost. This is a 4-core VM. I set it up with:
aptitude update
aptitude install apache2 apache2-utils curl
curl localhost/ # verify hello world static page works
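(For anyone reproducing this: the "hello world" page is just a small static file served out of Apache's default DocumentRoot. Something like the following sketch would set it up; the /var/www/index.html path is an assumption based on the stock Ubuntu 10.04 layout.)

echo '<html><body>hello world</body></html>' > /var/www/index.html   # minimal static page in the default DocumentRoot
curl localhost/   # should print the hello world markup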
Then I benchmarked the response time and requests/sec.
Edit 4: I benchmarked with something like "for x in $(seq 1 40); do ab -n 10000 -c $x -q localhost/ | grep whatever; done" (a fleshed-out version of that loop is sketched below).
The exact commands and data are at https://docs.google.com/spreadsheet/pub?hl=en_US&hl=en_US&key=0AurdDQB5QBe7dGtiLUc1SWdOeWQ4dGo3VDI5Yk8zbWc&output=html
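For concreteness, the benchmarking loop might look like the sketch below; the grep pattern is only illustrative (standing in for the "whatever" above), and the exact invocations are in the linked spreadsheet.

# sweep concurrency from 1 to 40, 10000 requests per run,
# keeping only ab's requests/sec and mean response time lines
for x in $(seq 1 40); do
    ab -n 10000 -c "$x" -q localhost/ | grep -E 'Requests per second|Time per request'
done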
CPU usage was about 25% on each core while running the tests.
Edit 2: Memory usage was at 45 / 245 MB according to htop.
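(If you want to repeat the measurement, per-core CPU and memory can be watched from a second shell while ab runs; a minimal sketch, assuming the sysstat package for mpstat:)

aptitude install sysstat htop   # if not already installed
mpstat -P ALL 1                 # per-core CPU utilization, sampled every second
htop                            # interactive per-core CPU and memory view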
Edit 1: I just tried the same thing on an Ubuntu 11.04 VM and the overall issue is the same, but the performance is even worse: it gets around 2100 reqs/sec for most levels of concurrency and uses about 50% CPU on each core.
Edit 3: I tried it on real hardware and saw a peak in reqs/sec around 4 or 5 concurrent requests, after which it dropped a little and flattened out.
Can anyone explain why this is happening, how I can figure out what the bottleneck is, or how I can improve this? I've done some searching and haven't found any answers.