I am preparing to set up a MySQL master-slave or master-master relationship. Right now I have a single MySQL production server, and of course I don't want a lot of downtime while I connect the slave.
Is there no way I can connect an empty slave and let it "slowly" synchronize data from the master until they are identical?
I have noticed that I can take a transactional dump with mysqldump on the master and then import it into the slave, but by the time the slave has imported the dump, a lot of new rows will have been written on the master. How will the slave get those?
I hope I am missing something obvious here, but extensive googling turns up advice such as "since this will result in less downtime in the future, some downtime now maybe isn't such a bad thing". I would really like to avoid that.
Related question for MyISAM, but I use InnoDB.
If you're using 100% InnoDB then you're in luck. You can use XtraBackup to make a full backup of your master database without any downtime or any table locking. This will be a consistent, snapshot-style backup, the same sort you get when you use the FLUSH TABLES WITH READ LOCK or --master-data options. The XtraBackup tool also drops an extra file in the backup directory that contains the MASTER_LOG_POS and MASTER_LOG_FILE information you need to start replication on the slave.
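As a rough sketch, with placeholder credentials and paths (older XtraBackup releases ship this as the innobackupex wrapper rather than the xtrabackup binary):

    # Non-blocking backup of the running master (user, password and path are examples)
    xtrabackup --backup --user=backupuser --password=secret \
        --target-dir=/data/backups/master-full

    # The MASTER_LOG_FILE / MASTER_LOG_POS values end up in this file
    cat /data/backups/master-full/xtrabackup_binlog_info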
Once you're done backing up, you will need to run XtraBackup's --prepare option on the backup, load it into the slave, start the slave MySQL process back up, and tell it the new MASTER_LOG_POS and MASTER_LOG_FILE values it needs. You will want skip-slave-start in your my.cnf before you start the slave up.
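A sketch of that restore sequence, assuming the backup directory has been copied to the slave; the host name, credentials and binlog coordinates below are placeholders to be replaced with your own values and with the contents of xtrabackup_binlog_info:

    # Apply the transaction log so the backup is consistent, then move it into
    # the slave's (empty) datadir and fix ownership
    xtrabackup --prepare --target-dir=/data/backups/master-full
    xtrabackup --copy-back --target-dir=/data/backups/master-full
    chown -R mysql:mysql /var/lib/mysql

Then, once mysqld is running on the slave (with skip-slave-start in place):

    CHANGE MASTER TO
        MASTER_HOST='master.example.com',
        MASTER_USER='repl',
        MASTER_PASSWORD='secret',
        MASTER_LOG_FILE='mysql-bin.000123',  -- from xtrabackup_binlog_info
        MASTER_LOG_POS=456789;               -- from xtrabackup_binlog_info
    START SLAVE;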
Also bear in mind that the mysql schema is MyISAM by default (and if memory serves correctly it can only be MyISAM), so you will still have to be careful not to make any changes to any of those tables while running the backup. As long as you stick to that rule, the master information will still be correct. It's often a good idea to ignore the mysql schema in your my.cnf on the slave and only ever create users with SELECT privileges. Inconsistent and out-of-sync slaves are hard to detect and a pain to deal with, even with the tools that Percona (and Maatkit before them) provide for this.
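For instance, a filter along these lines in the slave's my.cnf (the option names are real, everything else here is illustrative):

    [mysqld]
    replicate-ignore-db         = mysql
    replicate-wild-ignore-table = mysql.%

combined with read-only application users on the slave, e.g. (pre-5.7 GRANT syntax):

    GRANT SELECT ON *.* TO 'reporting'@'%' IDENTIFIED BY 'secret';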
Edit: Although you said you are using InnoDB, for completeness there is another way if you are using MyISAM tables. If you have a volume manager with snapshotting (such as ZFS or LVM), you can run FLUSH TABLES WITH READ LOCK followed by SHOW MASTER STATUS, create a snapshot, and then run UNLOCK TABLES. The downtime should be fairly minimal. For comparison, the cron job that did this last night to back up one of our databases took 6 seconds to create the snapshot, which is the bit where the database is "down", and 27 minutes to copy the files from the snapshot to the backup server.
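Roughly, the sequence looks like this (the LVM volume name is made up; keep the mysql session open for the whole procedure, because disconnecting releases the lock):

    -- in a mysql session that stays open throughout:
    FLUSH TABLES WITH READ LOCK;
    SHOW MASTER STATUS;   -- note the File and Position values
    -- from a second shell, snapshot the volume holding the datadir, e.g.
    --   lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql-data
    -- then, back in the mysql session:
    UNLOCK TABLES;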
It is not possible to start MySQL replication "from scratch".
Instead, you could use mysqldump with the --master-data=2 parameter when dumping your databases. This will lock all tables within the affected databases during the dump and write the replication coordinates into the dump's header in a commented-out fashion.
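For illustration only (the database selection, output file and binlog coordinates below are placeholders):

    # dump everything, embedding the master's binlog coordinates as a comment
    mysqldump --master-data=2 --all-databases > masterdump.sql

The header of the dump will then contain a line along these lines:

    -- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=1234;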
After your import is complete, you can manually run the CHANGE MASTER TO command from that header, adding the host name of your master and the authentication data; replication will then pick up from the point of the dump.
Please keep in mind that the mysqldump process will incur some "downtime" itself due to lock contention: it places read locks on all tables (similar to what a FLUSH TABLES WITH READ LOCK command would do) for the entire duration of the dump, so no request writing to any of the tables will return until the dump is complete. Those blocked writes will probably also block subsequent read requests, unless you have specified low_priority_updates in your MySQL configuration or issued SET GLOBAL LOW_PRIORITY_UPDATES=1 before the dump.