What's the most appropriate way to back up MySQL databases? I would like to take backups at least every hour. I thought mysqldump was the only option, but it turns out there are other options too, like binary logs.

I would like to know how the "big guys" do it.

My tables are InnoDB, and I would like to have backups as often as possible (at least every hour).

I guess even 1-hour intervals are not a perfect solution, since I would still lose the changes made after the last backup, up to an hour's worth.
I believe mysqldump is best used on servers where it won't hold up any traffic: on MyISAM tables it takes a read lock for the duration of the dump. I know you said you're using InnoDB though, so there are a few solutions out there.
I would start by checking this tool out: http://www.percona.com/software/percona-xtrabackup/ The Percona guys are considered some of the experts on MySQL, and this is one of their tools for doing a 'hot backup' of InnoDB tables, which means you can run it even while your database is in use. I don't personally have any experience with it yet.
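For what it's worth, here is a minimal sketch of that workflow using XtraBackup's innobackupex wrapper (the user, password, and paths are placeholder examples, and the exact invocation depends on your XtraBackup version):

    # Take a hot backup while the server is running; innobackupex
    # creates a timestamped directory under /backups.
    innobackupex --user=backup --password=secret /backups

    # "Prepare" the backup by applying the InnoDB log, so the copied
    # data files are consistent and ready to restore.
    innobackupex --apply-log /backups/2011-05-01_02-00-00

The prepare step can also be run later, e.g. just before a restore.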
The site I'm currently working on has hundreds of concurrent users, and in order to pull off reliable backups we first had to set up replication to a slave. We then run mysqldump on the slave, which does not disturb performance on the master server. Read more about replication here: http://dev.mysql.com/doc/refman/5.5/en/replication.html
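For reference, a dump from the slave might look something like this (host, credentials, and paths are placeholders); --single-transaction gives a consistent snapshot of InnoDB tables without read-locking them:

    # Consistent, non-locking dump of all databases from the slave.
    # --master-data=2 writes the binlog coordinates into the dump as a
    # comment, which is useful for point-in-time recovery later.
    mysqldump --host=slave1 --user=backup --password=secret \
        --single-transaction --master-data=2 --all-databases \
        | gzip > /backups/dump-$(date +%F-%H%M).sql.gz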
I believe the consensus is that you want to run backups about once a day using a very reliable method like mysqldump. In between, you have a master and slave replicating and producing binary logs, and the binary logs can be used for point-in-time catch-up. In my experience, hot-backup tools tend to block one operation or another, leaving some poor user with a bad experience.
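To illustrate the catch-up part: assuming binary logging is enabled, mysqlbinlog can replay the changes recorded since the last dump (the file name and timestamp below are examples; in practice you would start from the coordinates recorded with the dump):

    # Replay everything logged after the nightly dump was taken.
    mysqlbinlog --start-datetime="2011-05-01 02:00:00" \
        /var/log/mysql/mysql-bin.000042 | mysql -u root -p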
To a degree it depends on what restore options you want. Some methods work (mostly) but greatly limit your restore capabilities. For example, a filesystem-level backup only allows you to restore to the same machine or a clone. Using binary logs for backup purposes is fraught with risk, as the binary log doesn't tell the whole story.
Using mysqldump on a replica is the best option in the vast majority of cases. It not only allows you to back up without disturbing the master, but if you configure master/master instead of master/slave, you also add redundancy: if you need to take the master offline for any reason, you can simply point your application at the second system.
In conjunction with a mysqldump backup, you may replicate data to a MySQL slave where the data is stored on a filesystem with a snapshot feature such as ZFS. In that case, you stop the slave, take a snapshot, and restart the slave.
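As a sketch, assuming the slave's datadir lives on a ZFS dataset named tank/mysql (the name is an example), the cycle could look like:

    # Pause the SQL thread so no further changes hit the data files.
    mysql -e "STOP SLAVE SQL_THREAD;"

    # Take an atomic, near-instant snapshot of the datadir.
    zfs snapshot tank/mysql@backup-$(date +%F-%H%M)

    # Resume replication; the slave catches up from where it left off.
    mysql -e "START SLAVE SQL_THREAD;"

Because the snapshot is atomic, it behaves like a crash-consistent copy that InnoDB can recover from on startup.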
The basic database backup scheme would include the following components:

- a regularly scheduled full backup of the data (a nightly mysqldump, for example), and
- binary logging enabled, with the binary logs written to a disk (or machine) separate from the one holding the database files (a configuration sketch follows below).
The idea behind this is that in case of a failure of the disk (or disk array) holding the database, you will be able to restore your data up to the last transaction, using your previously taken backup plus a replay of the latest transaction logs, which were left unaffected by the failure.
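For the binary-log component mentioned above, a hedged my.cnf excerpt might look like this (the paths and retention period are illustrative):

    # /etc/my.cnf (excerpt)
    [mysqld]
    # Write the binary logs to a different disk than the data files.
    log-bin          = /mnt/logdisk/mysql-bin
    # Sync the binlog to disk at every commit, so a crash loses nothing.
    sync_binlog      = 1
    # Keep two weeks of logs around for point-in-time replay.
    expire_logs_days = 14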
As for how to do that, a similar question has been asked before and got a reference to the MySQL documentation, which describes some options.
I would certainly strongly recommend against replication as a backup "strategy": it is not designed as a backup. Some people use a replica to reduce the load and lock contention on the primary database server by using the slave's database as the backup source. Yet you are still left with the problem of consistency-checking the slave's data set: MySQL does not do it for you, and it is easy to break MySQL replication through misconfiguration. There are tools for consistency checking, like mk-table-checksum from Maatkit, but then again little is won: consistency checking puts load and locks on your master's tables.
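For completeness, a rough sketch of how such a check was typically run with Maatkit (the host and checksum-table names are placeholders, and the exact flags varied between Maatkit releases):

    # Checksum tables on the master; the checksum statements replicate
    # to the slaves and are re-executed there against the slave's data.
    mk-table-checksum --replicate=test.checksum h=master.example.com

    # Afterwards, compare the master's and slaves' stored checksums
    # to find tables that have drifted.
    mk-table-checksum --replicate=test.checksum --replicate-check 1 \
        h=master.example.com

Note that, as said above, the checksum queries themselves add load and locking on the master.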