I need some help with analyzing a log from Apache Bench:
Benchmarking texteli.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software:
Server Hostname: texteli.com
Server Port: 80
Document Path: /4f84b59c557eb79321000dfa
Document Length: 13400 bytes
Concurrency Level: 200
Time taken for tests: 37.030 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 13524000 bytes
HTML transferred: 13400000 bytes
Requests per second: 27.01 [#/sec] (mean)
Time per request: 7406.024 [ms] (mean)
Time per request: 37.030 [ms] (mean, across all concurrent requests)
Transfer rate: 356.66 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 27 37 19.5 34 319
Processing: 80 6273 1673.7 6907 8987
Waiting: 47 3436 2085.2 3345 8856
Total: 115 6310 1675.8 6940 9022
Percentage of the requests served within a certain time (ms)
50% 6940
66% 6968
75% 6988
80% 7007
90% 7025
95% 7078
98% 8410
99% 8876
100% 9022 (longest request)
What can these results tell me? Isn't 27 rps too slow?
When running load tests, picking an arbitrary number and hammering your server is generally not a good way to go. All you've proven is that your server can handle 200 concurrent visitors, as long as they don't mind waiting ~7 s for each request to load. What you PROBABLY want to do is test at the number of concurrent visitors you actually expect.
Once you have your results, graph them: number of visitors versus average request time, including max and min bars. Load testing an arbitrary application is only as useful as the tests are relevant; in this case, for example, if it takes 1 visitor 6 s to load a page, then 7 s a page for 200 visitors doesn't sound too bad, does it?
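As a side note, the three summary numbers in your report are tied together by simple arithmetic, so none of them carries extra information on its own. A quick awk check (a sketch using the counts from the run above: 1000 requests, concurrency 200, 37.030 s total) reproduces them:

```shell
# Derive ab's summary metrics from the raw counts of this run.
awk 'BEGIN {
  n = 1000        # Complete requests
  c = 200         # Concurrency Level
  t = 37.030      # Time taken for tests (s)

  rps = n / t                        # "Requests per second"
  printf "%.2f\n", rps               # -> 27.01
  printf "%.0f\n", c / rps * 1000    # "Time per request (mean)" -> 7406 ms
  printf "%.3f\n", t / n * 1000      # "mean, across all concurrent" -> 37.030 ms
}'
```

So 27 rps isn't slow or fast in the abstract: at concurrency 200 it mechanically implies a ~7.4 s mean request time, which is what the percentile table shows.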
You can start by picking a baseline number of total requests and concurrent requests, and checking the results.
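For example, a baseline run might look like this (the URL and the numbers are placeholders, not taken from the question):

```shell
# Baseline: 100 total requests, 10 at a time.
ab -n 100 -c 10 http://example.com/
```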
Then you can scale up by increasing the number of concurrent connections until you get close to the expected number of users, watching how your service responds. Repeat each run several times, check the variation in the timings, and take the average.
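The scaling-up-and-repeating procedure can be sketched as a simple loop (again, the URL, the concurrency steps, and the repeat count are placeholder assumptions):

```shell
# Step concurrency up toward the expected user count; repeat each level
# a few times and keep only the "Time per request" lines to average later.
for c in 50 100 150 200; do
  for run in 1 2 3; do
    ab -n 1000 -c "$c" http://example.com/ | grep "Time per request"
  done
done
```

Graphing the averaged times per concurrency level then gives you the visitors-versus-latency curve mentioned above.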