Can anyone advise what is going on with my setup? I have a MySQL server on Ubuntu which periodically produces very high iowait for long stretches, up to 10-20 minutes. During this time the system and the database are almost unavailable, and sites that use this database just hang. I checked vmstat during one of these periods and it shows numbers like below:
 r  b swpd  free  buff  cache si so    bi    bo   in   cs us sy id wa
 1 22    0 34712  8260 583416  0  0   660   935   76   99  6  2 84  6
 0 25    0 34560  8280 582932  0  0 42360 27008 2304 1804  9  3  0 84
 0 29    0 34560  8320 583676  0  0 41160 21524 2360 1763  4  4  0 92
 3 20    0 35912  8328 581532  0  0 12940  6856  766  764  1  0  0 99
 1 30    0 34512  8348 581804  0  0  4532  3748  925 1373  4  4  0 92
So iowait is huge. I am guessing that MySQL, which has 4 GB configured for the InnoDB buffer pool and a database of around 6-8 GB, is swapping. During one of these periods df showed the root drive almost full, at 95%. When I restarted MySQL it came back within a couple of minutes and everything returned to normal, and usage on the root drive (10 GB) dropped back to 25%. I am running MySQL on an EBS device on Amazon EC2.
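Since the space comes back as soon as mysqld is restarted, the usual suspects are on-disk temporary tables or logs growing under the root filesystem, or deleted files that mysqld still holds open. A shell sketch for checking this while the stall is happening (the paths in the comments are default Ubuntu locations, an assumption about this setup):

```shell
# Overall usage of the root filesystem, parsed from POSIX-format df output.
usage=$(df -P / | awk 'NR==2 {gsub(/%/, ""); print $5}')
echo "root filesystem usage: ${usage}%"

# Likely growth points on a default Ubuntu/MySQL install (paths assumed):
#   sudo du -sh /tmp /var/lib/mysql /var/log/mysql
# Space held by files mysqld has already deleted but still keeps open does
# not show up in du at all; lsof reveals them as "(deleted)" entries:
#   sudo lsof -p "$(pgrep -x mysqld)" | grep '(deleted)'
```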
What are my options? The box is an 8 GB large instance running Ubuntu 10.04.
I would appreciate any help, as I have been googling and trying to solve this for a couple of weeks already. Thanks.
That's a lot of disk I/O.
You might want to check with iotop what is generating the I/O (backups?). The MySQL process list may give you (us?) further clues. Also, try running mysqltuner.pl against the DBMS.
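Alongside iotop, it can help to see which tasks are actually stuck waiting on disk during a spike, since processes in uninterruptible sleep ("D" state) are what drive the `b` and `wa` columns in the vmstat output above. A minimal sketch that reads /proc directly:

```shell
# Count tasks currently in uninterruptible sleep (state "D").
# In /proc/PID/stat the state follows the command name, which is wrapped
# in parentheses and may itself contain spaces, so strip through the
# last ")" before taking the first remaining field.
dstate=0
for f in /proc/[0-9]*/stat; do
  state=$(sed -e 's/^.*) //' -e 's/ .*$//' "$f" 2>/dev/null) || continue
  if [ "$state" = "D" ]; then
    dstate=$((dstate + 1))
  fi
done
echo "tasks in D state: $dstate"
```

Running this a few times during a stall, together with `ps`, points at the specific processes blocked on I/O.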
I had a similar issue with my cloud provider (not Amazon). I ran some benchmarks with sysbench to check disk performance.
The "Gold Cloud" tier was using SAN storage.
After that, I migrated the servers to what they call the "Silver Cloud", which uses shared SAS disks.
My MySQL backup times, before and after the migration:
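For reference, a sysbench file-I/O comparison like the one described here looks roughly as follows. This is a hedged sketch using the 0.4-era option syntax (the version that shipped around Ubuntu 10.04); the 256M file size and 10-second runtime are illustrative choices, not values from this answer:

```shell
# Prepare test files, run a random read/write workload, then clean up.
if command -v sysbench >/dev/null 2>&1; then
  sysbench --test=fileio --file-total-size=256M prepare
  sysbench --test=fileio --file-total-size=256M \
           --file-test-mode=rndrw --max-time=10 --max-requests=0 run
  sysbench --test=fileio --file-total-size=256M cleanup
else
  echo "sysbench is not installed"
fi
```

Running the same test on both storage tiers gives a throughput number you can compare directly.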
Thanks, guys. It looks like with more memory the problem has almost disappeared; there is only one short iowait spike a day now. I guess the answer is more memory. – user330026
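Since the resolution here was more memory: once the box has more RAM, the InnoDB pool should be resized to actually use it, and `SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'` will show whether reads are still missing the pool and going to disk. A my.cnf sketch, where the 6G figure is an illustrative assumption for an 8 GB box rather than a value from this thread:

```
[mysqld]
# Size the buffer pool to cover the hot working set while leaving
# headroom for the OS page cache and per-connection buffers.
innodb_buffer_pool_size = 6G
```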