I've set up load balancing for MySQL slaves using HAProxy (with an xinetd-based health check). Two load balancers share a virtual IP that is managed by Pacemaker:
crm configure show:
node SVR120-27148.localdomain
node SVR255-53192.localdomain
primitive failover-ip ocf:heartbeat:IPaddr2 \
        params ip="192.168.5.9" cidr_netmask="32" \
        op monitor interval="5s" \
        meta is-managed="true"
primitive haproxy ocf:heartbeat:haproxy \
        params conffile="/etc/haproxy/haproxy.cfg" \
        op monitor interval="30s" \
        meta is-managed="true"
colocation haproxy-with-failover-ip inf: haproxy failover-ip
order haproxy-after-failover-ip inf: failover-ip haproxy
property $id="cib-bootstrap-options" \
        dc-version="1.0.12-unknown" \
        cluster-infrastructure="openais" \
        no-quorum-policy="ignore" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        last-lrm-refresh="1342783084"
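For reference, the colocation and order constraints above tie haproxy to failover-ip, so both resources should always end up on the same node. A quick way to confirm which node currently holds them (standard Pacemaker tooling, not from the original post):

crm_mon -1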
/etc/haproxy/haproxy.cfg:
global
    log 127.0.0.1 local1 debug
    maxconn 4096
    pidfile /var/run/haproxy.pid
    daemon

defaults
    log global
    mode tcp
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend FE_mysql
    bind 192.168.5.9:3307
    default_backend BE_mysql

backend BE_mysql
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    #server mysql1 192.168.6.47:3306 weight 1 check port 9199 inter 12000 rise 3 fall 3
    server mysql2 192.168.6.248:3306 weight 1 check port 9199 inter 12000 rise 3 fall 3
    server mysql3 192.168.6.129:3306 weight 1 check port 9199 inter 12000 rise 3 fall 3
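The check port 9199 lines together with option httpchk imply a small HTTP responder on each MySQL slave; the question mentions xinetd but does not show that side. A minimal sketch of what such a service typically looks like (the service name, script path and options are assumptions, not the OP's actual files):

# /etc/xinetd.d/mysqlchk  (hypothetical example; also needs a
# "mysqlchk 9199/tcp" entry in /etc/services)
service mysqlchk
{
    disable         = no
    flags           = REUSE
    socket_type     = stream
    port            = 9199
    wait            = no
    user            = nobody
    # script that checks the local MySQL server and prints
    # "HTTP/1.1 200 OK" when healthy, "HTTP/1.1 503 Service Unavailable" otherwise
    server          = /usr/local/bin/mysqlchk
    log_on_failure  += USERID
    only_from       = 0.0.0.0/0
    per_source      = UNLIMITED
}

HAProxy marks a slave as down when the check on port 9199 does not return a 2xx/3xx status.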
My problem is that, most of the time when connecting via the virtual IP, /var/log/mysqld.log keeps flooding with:

120719 12:59:46 [Warning] Aborted connection 17237 to db: 'db' user: 'user' host: '192.168.5.192' (Got an error reading communication packets)
120719 12:59:49 [Warning] Aborted connection 17242 to db: 'db' user: 'user' host: '192.168.5.192' (Got an error reading communication packets)
120719 12:59:52 [Warning] Aborted connection 17248 to db: 'db' user: 'user' host: '192.168.5.192' (Got an error reading communication packets)

(the connections are still established)

192.168.5.192 is HAProxy's IP address.
mysql> show global status like 'Aborted%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| Aborted_clients  | 53626 |
| Aborted_connects | 400   |
+------------------+-------+
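One way to narrow down which side is cutting the connections is to watch how long the sessions opened from the HAProxy address sit idle before they are aborted; if they consistently die at around 50 seconds, that matches the clitimeout/srvtimeout of 50000 ms in haproxy.cfg rather than any of the MySQL timeouts listed further down. A diagnostic sketch (hypothetical query, MySQL 5.1+):

SELECT id, user, host, command, time, state
FROM information_schema.processlist
WHERE host LIKE '192.168.5.192%'
ORDER BY time DESC;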
I don't think 128M is too small for max_allowed_packet:
max_connections = 300
max_allowed_packet = 128M
The _timeout variables:
mysql> show global variables like '%timeout';
+----------------------------+----------+
| Variable_name              | Value    |
+----------------------------+----------+
| connect_timeout            | 10       |
| delayed_insert_timeout     | 300      |
| innodb_lock_wait_timeout   | 60       |
| innodb_rollback_on_timeout | OFF      |
| interactive_timeout        | 3600     |
| lock_wait_timeout          | 31536000 |
| net_read_timeout           | 30       |
| net_write_timeout          | 60       |
| slave_net_timeout          | 3600     |
| wait_timeout               | 600      |
+----------------------------+----------+
Is there anything that could cause this? Is it related to HAProxy? Any thoughts?
These are the reasons given in the MySQL docs for aborted connections: the client program did not call mysql_close() before exiting, the client had been sleeping for longer than wait_timeout or interactive_timeout without issuing a request, or the client program ended abruptly in the middle of a data transfer.
And this explains it better:
I found that increasing the timeout settings in the haproxy.cfg file solved this error for me. I spent a lot of time checking the my.cnf wait_timeout etc., and realised the bottleneck was actually HAProxy.
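For comparison with the config in the question: contimeout/clitimeout/srvtimeout are the old directive names and take milliseconds, so 50000 is only 50 seconds of idle time before HAProxy closes a connection on either side. A sketch of what "increasing the timeout settings" can look like (the values here are illustrative, not taken from the answer):

defaults
    mode tcp
    # modern equivalents of contimeout / clitimeout / srvtimeout
    timeout connect 5s
    # idle timeouts raised well above what clients legitimately hold connections for
    timeout client  30m
    timeout server  30m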
Check the HAProxy manual. I set tune.idletimer=60000 and restarted the HAProxy service, but the problem happened again. I hit this problem with HAProxy 1.8.14; the old HAProxy 1.5.4 is OK.
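For reference, tune.idletimer belongs in the global section and its value is in milliseconds; a minimal sketch of where the setting from this answer goes:

global
    # duration (in ms) after which an empty buffer is considered idle;
    # 60000 = 60 s, HAProxy's default is 1000 ms
    tune.idletimer 60000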
For me the major setting I had to use/change was:
I suspect that some clients, PHP in particular, maintain socket connections to MySQL for performance reasons, so the only workaround is to allow them to keep such connections open for longer. I found that 600s made the errors disappear and settled on that value.
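The directive itself is missing above; given the 600s figure, and that the question's MySQL wait_timeout is also 600, a plausible reading (an assumption, not confirmed by the answer) is that the HAProxy idle timeouts were raised to at least match it:

defaults
    # assumption: let idle client/server connections (e.g. persistent PHP
    # connections) sit for up to 10 minutes, matching wait_timeout = 600
    timeout client 600s
    timeout server 600s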