I have a MySQL (actually, MariaDB 5.2) setup deployed on a 4 GB Rackspace Cloud node. All tables are InnoDB, the InnoDB buffer pool is 2 GB, and it is 95% full most of the time. I use ext3 as the filesystem (not sure whether I can use anything else on a cloud node). My application does a lot of writes to two statistics tables. Both tables have an auto_increment PK and a unique index. These tables are cleaned every 10 minutes by a cron job, which also runs a number of long-running queries to process the raw stats.
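(For reference, the fill figure comes from the standard InnoDB status counters; nothing below is specific to my setup:)

    mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%';"
    # fill ratio = (pages_total - pages_free) / pages_total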
The problem is that I get high I/O spikes from time to time. They look related to the number of uncheckpointed bytes in the InnoDB transaction log: the average is 18.6 MB, but it climbs to 80-90 MB when I see these spikes. I am not sure what causes what here, though.
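This is how I measure the uncheckpointed bytes, in case it matters: the checkpoint age is the gap between the current log sequence number and the last checkpoint reported by SHOW ENGINE INNODB STATUS (the output format may differ slightly between InnoDB versions):

    mysql -e "SHOW ENGINE INNODB STATUS\G" \
        | grep -E 'Log sequence number|Last checkpoint at'
    # The redo log capacity that bounds the checkpoint age:
    mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_log_file%';"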
iotop shows that kjournald is the top I/O consumer when these spikes occur. I remounted the FS with the "data=writeback" option, but to no avail: kjournald stayed at the top. I also tried decreasing swappiness and increasing queue/nr_requests for the root device, but that did not help either.
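For completeness, this is roughly what I ran (the block device name is an assumption based on a typical Xen cloud image):

    sysctl -w vm.swappiness=10                     # down from the default 60
    echo 512 > /sys/block/xvda/queue/nr_requests   # device name assumed
    mount -o remount,data=writeback /
    # Some kernels refuse to change the data= mode on a remount of the
    # root fs; the fallback is rootflags=data=writeback on the kernel
    # command line, or tune2fs -o journal_data_writeback plus a reboot.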
I am not sure what to do next. I wonder why kjournald generates such a heavy I/O load at all on a filesystem where only metadata is journaled. Should I try tuning the journal commit interval, or perhaps use ionice?
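In case either is worth trying, this is what I have in mind; the commit value is a guess, not something I have tested:

    # Stretch the journal commit interval from its 5 s default, trading
    # durability of the last few seconds for fewer journal flushes:
    mount -o remount,commit=30 /
    # Or push kjournald's I/O to the idle class; note that only the CFQ
    # scheduler honors ionice classes, and idling the journal thread can
    # stall everything that waits on the journal:
    ionice -c3 -p $(pgrep kjournald)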
80-90 MB is close to the default ext3 journal size; it looks like the journal fills up and is then flushed. The answer depends heavily on how much you care about consistency on this filesystem, because that determines whether you have plenty of options or only a few.
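To check the journal-size theory (the device name is just an example; recent e2fsprogs print the size directly):

    dumpe2fs -h /dev/xvda1 | grep -i journal
    # If losing up to ~1 s of committed transactions on an OS crash is
    # acceptable, flush the InnoDB log once per second instead of at
    # every commit; this usually cuts the write pressure considerably:
    mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"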
Also, if those statistics tables are "cleaned every 10 mins" anyway, why not keep them in memory in the first place?
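A rough sketch of what I mean; the table and column names are made up, and note that MEMORY tables use table-level locks, fixed-width rows, and no BLOB/TEXT columns:

    mysql -e "CREATE TABLE stats_raw (
        id     INT UNSIGNED NOT NULL AUTO_INCREMENT,
        metric VARCHAR(64)  NOT NULL,
        val    INT          NOT NULL,
        PRIMARY KEY (id),
        UNIQUE KEY uq_metric (metric)
    ) ENGINE=MEMORY;"
    # Contents vanish on restart -- acceptable if the cron job wipes
    # the table every 10 minutes anyway.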