This is my current test setup:
- SQL0: 192.168.22.64 (master+slave)
- SQL1: 192.168.22.65 (master+slave)
- loadbalancer0: 192.168.22.66
- loadbalancer1: 192.168.22.67
- Virtual IP: 192.168.22.80
The load balancers are connected to each other using Heartbeat, and both of them run MySQL Proxy.
Everything works like a charm: if one of the SQL servers dies, everything stays up; if one of the LBs dies, everything stays up.
However, when checking which SQL server is being used, I noticed it's always the same one, unless that server becomes unavailable.
After a couple of hours SQL1, for example, has 600,000 queries, whereas SQL0 has, say, 20,000.
Is there a way to balance it better, so that it comes close to ~50% on SQL0 and ~50% on SQL1?
This is the basic command line I start the proxy with:
/usr/bin/mysql-proxy --proxy-skip-profiling -b 0.sql.domain.com:3306 -b 1.sql.domain.com:3306 --keepalive -P :3306
As proof, here is a test run sampled every 10 minutes since yesterday afternoon.
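In case anyone wants to reproduce this: a minimal sketch of how the per-server counters can be sampled, assuming mysql-connector-python is installed on the monitoring box; the "monitor" user and password are placeholders.

    import time
    import mysql.connector

    BACKENDS = ["192.168.22.64", "192.168.22.65"]  # SQL0, SQL1

    def questions(host):
        """Read the cumulative 'Questions' counter from one backend."""
        conn = mysql.connector.connect(host=host, user="monitor", password="secret")
        cur = conn.cursor()
        cur.execute("SHOW GLOBAL STATUS LIKE 'Questions'")
        _, value = cur.fetchone()
        conn.close()
        return int(value)

    while True:
        print(time.strftime("%H:%M"), {h: questions(h) for h in BACKENDS})
        time.sleep(600)  # one sample every 10 minutes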
MySQL Proxy is connection-based. You mentioned queries in your question; to use that as an example, if you have one box that opens a single connection and runs 100,000 queries over it, they will all go to the same back-end server. You can verify this by looking at the source code: it is just grabbing the next back-end off the pool for each new connection. If you count the TCP connections opened by the ODBC driver you are using, they should come out to roughly 50/50.
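To make the connection-vs-query distinction concrete, here is a toy model (an illustration of the behaviour, not the proxy's actual source): the back-end is picked once per connection, round-robin, so every query over that connection lands on the same server.

    from itertools import cycle

    backends = cycle(["SQL0", "SQL1"])  # round-robin back-end pool

    class Connection:
        def __init__(self):
            # A back-end is chosen once, when the connection is opened...
            self.backend = next(backends)

        def query(self, sql):
            # ...and every query over this connection reuses it.
            return f"{sql} -> {self.backend}"

    conn = Connection()  # one box, one connection
    for i in range(3):
        print(conn.query(f"SELECT {i}"))  # all three hit the same back-end
    # Only a second Connection() would ever reach the other server.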
You could look at increasing the number of DB connections in your application's ODBC configuration, in the hope of spreading the query load that way.
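As a rough sketch of what that buys you (same toy model, kept self-contained): with a pool of connections instead of a single one, the per-connection round-robin spreads the load close to evenly.

    from itertools import cycle
    from collections import Counter

    backends = cycle(["SQL0", "SQL1"])
    pool = [next(backends) for _ in range(8)]  # back-end fixed per connection

    # Dispatching 100,000 queries across the pool instead of one connection:
    load = Counter(pool[i % len(pool)] for i in range(100_000))
    print(load)  # Counter({'SQL0': 50000, 'SQL1': 50000}) -- ~50/50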