I'm considering setting up replication of our MySQL database so that each of our branch offices has a local slave, with the master staying in the main office, in order to significantly improve application performance at the branch offices.
The database itself isn't that large (<1 GB), but considering 200-300 record updates per minute at most, I'm wondering: how fast is replication? (Assume a generic 5 Mbit DSL connection to start, faster if necessary; I'm trying to keep costs as low as possible, but the money is there for more.)
Are whole tables replicated in batches, or is replication done on demand as each record in a table is updated? (From the docs, it looks like this is configurable.)
Notes:
- I'm thinking a 1-master, 2-slave setup (2 branch offices for now), as in the docs here, except that the client is an app, not a web client
- Any update made on the master needs to reach the slaves in under 10 minutes.
- All of this assumes I can get our ORM (DevExpress XPO) happy with the concept of reading from the slave and writing to the master (roughly the split sketched below).
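For context, here's roughly the read/write split I have in mind, sketched in Python with made-up hostnames, credentials, and table; the real application would go through XPO's data layer rather than talking to MySQL directly.

```python
import mysql.connector  # pip install mysql-connector-python

# Hypothetical hostnames/credentials -- the real app would route this
# through XPO rather than raw connections.
master = mysql.connector.connect(
    host="db-master.mainoffice.example", user="app",
    password="secret", database="branch_app")
slave = mysql.connector.connect(
    host="db-slave.branch1.example", user="app_ro",
    password="secret", database="branch_app")

# Writes always go to the master in the main office...
cur = master.cursor()
cur.execute("UPDATE customers SET phone = %s WHERE id = %s", ("555-0100", 42))
master.commit()
cur.close()

# ...while reads are served by the local slave.
cur = slave.cursor()
cur.execute("SELECT name, phone FROM customers WHERE id = %s", (42,))
print(cur.fetchone())
cur.close()
```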
MySQL replication happens as close to real-time as possible, as limited by disk and network I/O. The slaves open a socket to the master, which is kept open. When a transaction occurs on the master, it gets recorded in the binlog, and is simply replayed on the slave(s). If the socket between master and slave is interrupted, the binlog is replayed for the slave upon the next successful connection.
Multi-master replication does the same thing, but in both directions.
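If you want to watch this in practice, you can poll `SHOW SLAVE STATUS` on each slave; `Seconds_Behind_Master` tells you how far the SQL thread is lagging. A rough Python sketch (hostnames and credentials are placeholders):

```python
import mysql.connector  # pip install mysql-connector-python

# Placeholder hostname/credentials for the branch-office slave.
conn = mysql.connector.connect(
    host="db-slave.branch1.example", user="monitor", password="secret")

cur = conn.cursor(dictionary=True)
cur.execute("SHOW SLAVE STATUS")
status = cur.fetchone()

if status is None:
    print("This server is not configured as a slave.")
else:
    # The IO thread pulls the master's binlog; the SQL thread replays it.
    print("IO thread running:    ", status["Slave_IO_Running"])
    print("SQL thread running:   ", status["Slave_SQL_Running"])
    print("Seconds behind master:", status["Seconds_Behind_Master"])

cur.close()
conn.close()
```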
Some basic arithmetic will help you work out your bandwidth needs:
Average transaction size * number of slaves * updates/minute = bandwidth needed
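For example, plugging in the question's numbers with an assumed average replicated transaction size of 2 KB (measure your own binlog growth for a real figure):

```python
# Back-of-the-envelope bandwidth estimate. The 2 KB average transaction
# size is an assumption -- check your own binlog growth for a real figure.
avg_transaction_bytes = 2 * 1024   # assumed average replicated transaction
updates_per_minute = 300           # upper end of the question's estimate
slaves = 2                         # one per branch office

bytes_per_minute = avg_transaction_bytes * updates_per_minute * slaves
kbits_per_second = bytes_per_minute * 8 / 1000 / 60

print(f"~{kbits_per_second:.0f} kbit/s of replication traffic")
# ~164 kbit/s total -- a small fraction of a 5 Mbit DSL line.
```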
Hope this helps.
Replication in MySQL is pretty quick at getting the data to the slave (quicker than you'll be able to run an `UPDATE` on the master and switch to another window to run a `SELECT` on the slave), if (and only if) the network connections are all up and everything's running OK. Any DSL-class connection should be fine for the general case of your regular small queries, but large insert/update queries can take a little while to copy, and re-synchronisation in the event of a replication stuff-up (and MySQL is viciously prone to those, unfortunately) will take a while, since it means copying your whole database from the master again. There are tricks for limiting the impact of resynchronisation on your master, like putting MySQL on LVM so you can do a very quick lock/snapshot and rsync the snapshot contents to the slave, but ultimately a resync is going to suck.
Replication on the slave side is handled by two independent threads: an IO thread that pulls the master's binlog into the relay log, and a SQL thread that applies the relay log locally.
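If you want to put a number on the propagation delay, you can script that same check: write a marker row on the master and poll the slave until it appears. A rough sketch, with made-up hostnames and a hypothetical `repl_test` table:

```python
import time
import uuid
import mysql.connector  # pip install mysql-connector-python

# Hostnames, credentials, and the repl_test table are made up; create the
# table once with: CREATE TABLE repl_test (token CHAR(36) PRIMARY KEY)
master = mysql.connector.connect(
    host="db-master.mainoffice.example", user="app",
    password="secret", database="branch_app")
slave = mysql.connector.connect(
    host="db-slave.branch1.example", user="app",
    password="secret", database="branch_app")
slave.autocommit = True  # so repeated SELECTs see fresh data, not one snapshot

token = str(uuid.uuid4())
start = time.time()

cur = master.cursor()
cur.execute("INSERT INTO repl_test (token) VALUES (%s)", (token,))
master.commit()
cur.close()

# Poll the slave until the marker row shows up (give up after 10 minutes).
scur = slave.cursor(buffered=True)
while time.time() - start < 600:
    scur.execute("SELECT token FROM repl_test WHERE token = %s", (token,))
    if scur.fetchone():
        print(f"Replicated in {time.time() - start:.2f} s")
        break
    time.sleep(0.5)
else:
    print("Marker row did not replicate within 10 minutes")
scur.close()
```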
Replication latency is limited by IO: first, the IO on the slave database to apply the transactions from the relay log (which may involve complex SQL queries), and second, the IO on the master to read its binlog and transmit it to each slave.
MySQL replication increases read query capacity, but it does not increase write performance, which is gated by how quickly IO can be flushed to the binlog on both the master and the slave.
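The settings that control that flushing (`sync_binlog` and `innodb_flush_log_at_trx_commit`) are worth checking on both boxes; a quick sketch with placeholder hostnames:

```python
import mysql.connector  # pip install mysql-connector-python

# Placeholder hostnames/credentials; sync_binlog and
# innodb_flush_log_at_trx_commit control how aggressively each server
# flushes its logs to disk on commit.
for name, host in [("master", "db-master.mainoffice.example"),
                   ("slave", "db-slave.branch1.example")]:
    conn = mysql.connector.connect(host=host, user="monitor", password="secret")
    cur = conn.cursor()
    cur.execute("SHOW VARIABLES WHERE Variable_name IN "
                "('sync_binlog', 'innodb_flush_log_at_trx_commit')")
    print(name, dict(cur.fetchall()))
    cur.close()
    conn.close()
```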