Server #1 is a MySQL database server running on Debian. It contains numerous tables, one of which is over 100GB.
Server #2 is used as a MySQL slave for replication, but now it needs to be reset and the replication reinitialized due to issues that arose.
There is not presently enough room on the hard drive of server #1 to do a full database dump (i.e. less than 100GB of free space). Aside from upgrading the hardware which would require downtime, what would be the best way to get the database dump from server #1 to server #2 intact, without corruption, and without filling up the hard drive on the server #1 in the process?
Quick and dirty way (Starting from Server #1):
You can do this without an intermediary file, and reset the replication pointers in the process, so you don't miss any updates (and have to resync again).
Stop replication on the slave:
slave> mysql -e 'STOP SLAVE;'
Dump the master to the slave, using the --master-data=1 flag:
master> mysqldump -e --master-data=1 --single-transaction $DATABASE | ssh -C user@slave 'mysql $DATABASE'
Start replication on the slave:
slave> mysql -e 'START SLAVE;'
--master-data=1 causes mysqldump to emit the CHANGE MASTER TO ... settings at the top of the dump, setting the replication binlog file and offset to the exact point in the master's binlog at the time the dump was taken.

-e uses the extended output format, basically multiple sets of values per INSERT statement, which is more efficient both on the wire and when being applied to the slave.

--single-transaction tells mysqldump to open a transaction over the whole dump, rather than using LOCK TABLES.
You can dump MySQL databases from a remote host; just use the --host or -h argument with mysqldump.
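For example, run from server2 (a sketch; the user, hostname, and output path are placeholders):

server2> mysqldump -h server1 -u backupuser -p --single-transaction $DATABASE | gzip -c > /backups/dump.sql.gz

This pulls the dump over the network and compresses it on server2, so nothing is ever written to disk on server1.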
You could obviously skip the dump to disk, but imports tend to be slower than dumping. If the CPU on server2 is a bottleneck and your disk is fast, then you may want to skip the gzip step, so you minimise the downtime on your master server.
Obviously, my answer skips the details to do with recording replication details and making sure you have a consistent dump for replication, as these are dealt with in the MySQL manual.
NFS is the obvious choice, if there is no firewall between the servers. If there is a firewall, you may need to reconfigure it to allow some additional traffic for NFS to work.
A similar, somewhat more complicated solution would be to use smbmount on server1 and smbd on server2.
If you don't want to mess with the firewall (or you don't want to send unencrypted data between the servers), I would recommend sshfs.
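For instance (a sketch; the user, mount point, and paths are placeholders):

server1> sshfs user@server2:/var/backups /mnt/server2
server1> mysqldump --single-transaction $DATABASE | gzip -c > /mnt/server2/dump.sql.gz
server1> fusermount -u /mnt/server2

The dump is written straight over the encrypted sshfs mount, so it never occupies local disk on server1.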
If you're using strictly MyISAM tables, shut down mysql and cp the data files from server1 to server2. You can also keep mysql up, put it in read-only mode with a global read lock, and then cp the tables over (a sketch follows below).
You can also mysqldump the tables on server1 to stdout and pipe into mysql connected to server2. But with a 100GB table that might take some time, and it would still require you to put the database in read-only mode if you want any consistency.
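A minimal sketch of the read-lock approach (the datadir path and user are placeholders; assumes all tables are MyISAM):

mysql> FLUSH TABLES WITH READ LOCK;

Then, from another shell on server1, while that mysql session stays open and holds the lock:

server1> rsync -a /var/lib/mysql/ user@server2:/var/lib/mysql/

Finally, back in the session holding the lock:

mysql> UNLOCK TABLES;

Note that the lock is released if the session that took it disconnects, so keep that session open for the duration of the copy.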
Your best bet, if replication is not an option and the local server does not have enough disk space, is to suffer the downtime and do a full sync from the live database files.
Depending on the speed of the network connection, that will probably result in less downtime than installing a new disk.
Alternatively, consider adding a temporary USB mass storage device to the server and using that to grab a full database dump. That should result in zero downtime.
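For example (a sketch; the device name and mount point are placeholders):

server1> mount /dev/sdb1 /mnt/usb
server1> mysqldump --single-transaction $DATABASE | gzip -c > /mnt/usb/dump.sql.gz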
If the boxes are within reasonable physical distance of each other, you could attach a USB drive and do your database dump there.
Another alternative could be ssh/pipes if this runs on a *nix box, although depending on what backend you're using for storage, the table lock could be painful for the large table if you're trying to leave the mysql process running.
From mysqlbox2, something like this (a sketch; the user, hostname, and database name are placeholders):
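mysqlbox2> ssh -C user@mysqlbox1 'mysqldump --single-transaction --quick $DATABASE' | mysql $DATABASE

ssh -C compresses the stream in transit, and since the dump is piped straight into mysql on mysqlbox2, nothing is written to disk on mysqlbox1.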