I have a MySQL (MyISAM) database around 30 GB in size. It has a single large table (and some smaller ones, but they make no difference). It handles around 2000 selects (single row by primary key) per second, 300 updates per second, and 50 inserts per second. We have around 10 independent servers with a similar database.
I want to back up that database (incrementally, preferably) without taking it offline. How can I do that?
If you want to use standard MySQL utilities for backups, you can configure a slave server and run mysqldump there.
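A minimal sketch of that approach, assuming replication is already set up and run on the slave; the user and backup path are placeholders:

```shell
#!/bin/sh
# Pause the SQL thread so the slave stops applying changes and the
# dump is a consistent point-in-time copy; the I/O thread keeps
# fetching binlogs in the background.
mysql -u root -e "STOP SLAVE SQL_THREAD;"

# Dump everything; --flush-logs rotates the binary logs so you have a
# clean cut-off point for point-in-time recovery later.
mysqldump -u root --all-databases --flush-logs \
    > /backup/dump-$(date +%F).sql

# Resume replication; the slave catches up from the relay logs.
mysql -u root -e "START SLAVE SQL_THREAD;"
```

Since this runs on the slave, the dump's table locks never touch your production selects/updates.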
Another way is to use Linux LVM or Solaris/FreeBSD ZFS snapshots: take a snapshot, copy it off with whatever command you want, then release the snapshot.
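A hedged sketch of the LVM variant, assuming the data directory sits on a logical volume I'm calling /dev/vg0/mysql (volume names and paths are placeholders). With MyISAM the read lock must be held while the snapshot is created, which is why the lock, snapshot, and unlock run in a single client session:

```shell
#!/bin/sh
# Quiesce MySQL only for the instant it takes to create the snapshot.
# The client's "system" command runs lvcreate while the lock is held.
mysql -u root -e "FLUSH TABLES WITH READ LOCK; \
  SYSTEM lvcreate --size 2G --snapshot --name mysql-snap /dev/vg0/mysql; \
  UNLOCK TABLES;"

# Mount the frozen copy and stream it off-box at leisure; production
# keeps writing to the origin volume the whole time.
mount /dev/vg0/mysql-snap /mnt/snap
rsync -a /mnt/snap/ backuphost:/backup/mysql/
umount /mnt/snap

# Release the snapshot before its copy-on-write space fills up.
lvremove -f /dev/vg0/mysql-snap
```

The --size value only has to hold the writes that happen while the snapshot exists, not the full 30 GB.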
There's also Zmanda Recovery Manager for MySQL, available in both a Community and an Enterprise version.
I use the mylvmbackup script on several databases, including one over 250 GB. It works very well; after taking the snapshot it spins up a parallel MySQL instance against it to verify that the snapshot actually works.
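For reference, a typical invocation might look like the following; the volume group and logical volume names are assumptions about your layout, and you should check your mylvmbackup version's man page for the exact options:

```shell
# Snapshot the LV holding the MySQL datadir, archive it as a tarball,
# and clean the snapshot up afterwards. Credentials and names are
# placeholders for your environment.
mylvmbackup --user=root --password=secret \
    --vgname=vg0 --lvname=mysql \
    --backuptype=tar --backupdir=/backup/mysql
```

The script wraps the same FLUSH TABLES WITH READ LOCK / lvcreate / copy / lvremove dance described above, so you don't have to maintain it by hand.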
If the database lives on a central storage machine (a SAN or filer), you can leverage that machine's snapshots. Typically this means putting the database into backup mode so it flushes all writes to disk and starts caching in memory, taking a snapshot on the storage side, then returning the database to normal mode. You can then mount that snapshot on a backup server and do whatever you want with it, including an incremental backup.
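One hedged way to do the incremental part from such a mounted snapshot is rsync's hard-link rotation; the mount point and backup paths here are placeholders:

```shell
#!/bin/sh
# Each run creates a new dated directory. Files unchanged since the
# previous run are hard-linked rather than copied, so each "full"
# backup only consumes space for the deltas.
TODAY=/backup/mysql/$(date +%F)
rsync -a --link-dest=/backup/mysql/latest /mnt/snap/ "$TODAY"/

# Point "latest" at the new backup for the next run to link against.
ln -sfn "$TODAY" /backup/mysql/latest
```

Every dated directory still looks like a complete copy, so restores don't need to replay a chain of increments.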
This approach is also often used when a database is heavily queried at the end of the month or quarter, as with financial applications. Instead of backing up from the snapshot, you can clone that consistent snapshot to other disks and redirect all the reporting (which would otherwise drag production performance down) to separate storage resources.