I'm benchmarking Apache/2.2.3 (prefork) using ab and siege with the following commands:
ab -kc 200 -t 120 http://www.mywebsite.com/test.php
siege -c200 -t2M http://www.mywebsite.com/test.php
test.php is a very simple script which just opens one MySQL connection and then closes it:
<?php
// Open the connection and bail out immediately if it fails
$link = mysql_connect("localhost", "username", "password");
if (!$link) {
    die('Could not connect: ' . mysql_error());
}
mysql_select_db("dbname", $link);
echo 'Connected successfully';
mysql_close($link);
?>
The results I get include a lot of failed requests, and I'm trying to figure out how to reduce them. Since this is a fairly simple script and server load is quite low, I shouldn't have a problem on a quad-core 3 GHz Xeon with 8 GB of RAM.
Output from siege:
Transactions: 9438 hits
Availability: 98.33 %
Elapsed time: 119.39 secs
Data transferred: 0.38 MB
Response time: 1.31 secs
Transaction rate: 79.05 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 103.37
Successful transactions: 9438
Failed transactions: 160
Longest transaction: 21.24
Shortest transaction: 0.21
Output from ab:
Benchmarking www.mywebsite.com (be patient)
Server Software: Apache/2.2.3
Server Hostname: www.mywebsite.com
Server Port: 80
Document Path: /test.php
Document Length: 22 bytes
Concurrency Level: 200
Time taken for tests: 35.851520 seconds
Complete requests: 50000
Failed requests: 618
(Connect: 0, Length: 618, Exceptions: 0)
Write errors: 0
Keep-Alive requests: 49600
Total transferred: 12932098 bytes
HTML transferred: 1149345 bytes
Requests per second: 1394.64 [#/sec] (mean)
Time per request: 143.406 [ms] (mean)
Time per request: 0.717 [ms] (mean, across all concurrent requests)
Transfer rate: 352.26 [Kbytes/sec] received
A quick highlight of my Apache configuration:
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15
<IfModule prefork.c>
StartServers 20
MinSpareServers 20
MaxSpareServers 50
ServerLimit 500
#default 200
MaxClients 500
MaxRequestsPerChild 4000
</IfModule>
I ran the same test with the same script on another, weaker server, and it had zero failed requests and finished the tests faster. So I wonder what's wrong with this one.
Updated MySQL configuration:
Variables
mysql> show variables LIKE '%connect%';
+--------------------------+-------------------+
| Variable_name | Value |
+--------------------------+-------------------+
| character_set_connection | latin1 |
| collation_connection | latin1_swedish_ci |
| connect_timeout | 10 |
| init_connect | |
| max_connect_errors | 10 |
| max_connections | 100 |
| max_user_connections | 0 |
+--------------------------+-------------------+
Global Status
mysql> SHOW GLOBAL STATUS LIKE '%connect%';
+--------------------------+---------+
| Variable_name | Value |
+--------------------------+---------+
| Aborted_connects | 343 |
| Connections | 1463797 |
| Max_used_connections | 101 |
| Ssl_client_connects | 0 |
| Ssl_connect_renegotiates | 0 |
| Ssl_finished_connects | 0 |
| Threads_connected | 3 |
+--------------------------+---------+
Your concurrency limit is definitely on the MySQL side, although I'm not sure this is necessarily a bad thing for real-world performance. You've got MySQL set to accept 100 simultaneous connections, so at most 100 Apache processes can be talking to it at once. Since your test script is so simple, it spends the majority of its active time connected, or at least connecting, to MySQL. Add a bit more for Apache processes in other states, and you get the concurrency of roughly 100 that siege reports. I'm not sure exactly why ab reports the higher concurrency level of 200; perhaps it counts things differently.
If you want your benchmark numbers higher, just raise the connection limit on the MySQL side. For a script that spends the vast majority of its time talking to the DB, the MySQL connection limit should probably be at least equal to the number of Apache processes (your MaxClients).
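As a rough sketch (assuming you keep MaxClients at 500; the value is illustrative), you can raise the limit at runtime and then persist it in my.cnf so it survives a restart:

mysql> SET GLOBAL max_connections = 500;

# my.cnf
[mysqld]
max_connections = 500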
Unfortunately, with ab you are really benchmarking your client's performance as well. A tool like httperf puts less load on the client host; for realistic testing you should drive the load from more than one host, and in some cases siege is also a good choice. Also check what ab itself is doing: it may be hitting the open file descriptor limit. The same goes for your server configuration:
http://httpd.apache.org/docs/2.0/misc/descriptors.html
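As a quick sanity check on the client box (the limit you actually need depends on your concurrency level, and raising the hard limit may require root):

ulimit -n          # show the current per-process open file limit
ulimit -n 4096     # raise it for this shell before re-running ab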
Perhaps there's something strange going on on MySQL's side. One thing that can easily happen during this kind of rapid "open a connection, close it" test is that max_connect_errors fills up; by default it's only 10, and once a client host exceeds it, MySQL blocks further connections from that host until the host cache is flushed.
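If that turns out to be the problem, a possible fix (the new limit is illustrative) is to unblock the host and raise the threshold:

mysql> FLUSH HOSTS;
mysql> SET GLOBAL max_connect_errors = 10000;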
And another thing: have you already checked that the maximum connection limits are the same on your weaker (but working) server and this faster (and failing) one? Maybe the default limit of 100 simultaneous MySQL connections is being exhausted.