Many months ago, while MySQL 5.5 was still in beta, this question was asked. Now that it has been released, with a few maintenance updates out, what do y'all think?
ISSUE #1) Semisynchronous Replication
A master with multiple slaves isn't all that great because only the first slave replicates quickly; subsequent slaves can drop back into asynchronous mode depending on how long a query takes to run. For the first slave, that is by design. There appear to be no guarantees that subsequent slaves will keep up in a timely fashion.
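For anyone reproducing this, the 5.5 semisync setup under discussion is enabled roughly like so (plugin file names assume a Linux build; the timeout value here is illustrative, not a recommendation):

```sql
-- On the master: load and enable the semisync master plugin.
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 10000;  -- ms to wait before falling back to async

-- On each slave: load and enable the semisync slave plugin, then
-- restart the I/O thread so it re-registers as a semisync slave.
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;
```

You can watch for the async fallback described above with SHOW STATUS LIKE 'Rpl_semi_sync_master_status'; it reads OFF once the master has reverted to asynchronous replication.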
On the other hand, simple master/slave, circular replication, and multi-master replication all run dynamite fast, IMHO.
ISSUE #2) Multiple InnoDB Buffer Pools
I have a client with 3 DB servers configured as follows:
Dual HexaCore (12 CPUs)
192GB RAM
2 TB SAS RAID10
The client has 480GB of data spread across 780 databases. Circular replication runs among the three DB servers.
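As a point of reference, a three-node ring like this generally requires each node to forward replicated writes and to stagger auto-increment values; a minimal my.cnf sketch for node 1 (server IDs and offsets here are illustrative, not the client's actual values):

```ini
# Node 1 of a 3-node circular replication ring
# (nodes 2 and 3 would use server-id / auto_increment_offset of 2 and 3)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
log-slave-updates        = 1   # forward changes received from the previous node
auto_increment_increment = 3   # one auto-increment "slot" per node
auto_increment_offset    = 1   # this node's slot
```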
Definitely one of the most powerful InnoDB configurations I have activated for this client is the following:
innodb_read_io_threads=64
innodb_write_io_threads=64
innodb_io_capacity=65536
innodb_buffer_pool_instances=1 (the 5.5 default)
innodb_buffer_pool_size=162G
MySQL 5.5 works like a dream for this client. Virtually all of the client's customer data is cached in memory, and all 12 CPUs are fully engaged with heavy InnoDB activity. Everything is dynamite fast. Dirty pages in the InnoDB buffer pool stay low and are flushed out quickly.
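The "virtually cached" and low-dirty-page observations can be confirmed from standard InnoDB status counters (the interpretation comments are mine):

```sql
-- Innodb_buffer_pool_reads counts reads that had to hit disk;
-- Innodb_buffer_pool_read_requests counts logical read requests.
-- A reads/read_requests ratio near zero means the working set is in memory.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

-- Compare Innodb_buffer_pool_pages_dirty against
-- Innodb_buffer_pool_pages_total to gauge the dirty-page backlog.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';
```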
I did try setting up the buffer pool this way at first (intending roughly 160GB in total):
innodb_buffer_pool_instances=64 (maximum value)
innodb_buffer_pool_size=2560M (2.5 GB)
This created a lot of thread locking. I suspect the cause was related data being cached in different buffer pools: accessing those segments of data produced mutex contention among the pools. A coworker of mine (an Oracle Certified Master) told me that Oracle has a similar feature, that thread locking is a problem there as well, and that Oracle does not promote its use for that reason.
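For what it's worth, this kind of contention shows up in the SEMAPHORES section of the InnoDB monitor output, which lists threads waiting on InnoDB mutexes and rw-locks:

```sql
-- Look at the SEMAPHORES section: long lists of waiting threads
-- and high "OS waits" counts indicate mutex contention.
SHOW ENGINE INNODB STATUS\G
```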
All the client's thread locking issues totally disappeared when I set the following:
innodb_buffer_pool_instances=1 (the 5.5 default)
innodb_buffer_pool_size=162G
My Conclusion on Multiple InnoDB Buffer Pools
I was looking forward to using this feature but came away disappointed. Multiple buffer pools do not buy you much if you have large, related datasets spread across the pools; they just introduce more thread locking. I wish you could assign specific data to specific buffer pools, just as you can preload MyISAM key caches. Only then would multiple buffer pools be truly handy.
If you have lots of very small tenant databases, multiple buffer pools are OK. There would still be thread locking issues, but not as severe.
IMHO, I would shy away from multiple buffer pools for large installations. A single giant buffer pool with many threads works more effectively. I would instead focus on using more InnoDB threading and engaging more CPUs, which is a distinct advantage over MySQL 5.0/5.1.
Re: issues with innodb_buffer_pool_size and innodb_buffer_pool_instances:
I think there is a disconnect in how these variables relate. innodb_buffer_pool_size is the total buffer pool memory available; it is then divided among the number of instances set by innodb_buffer_pool_instances.
In the poorly performing example above, 2.5GB was divided among 64 instances, which leaves 40MB per instance. The minimum recommended (if you use these options) is 1GB per instance. I can totally see how 40MB per instance would result in poor performance.
In our production environment, we run servers with 3 instances and a 3GB pool (1GB per instance) without excessive paging. We also run 4 instances with a 10GB pool, likewise with no excessive paging (newer hardware, so why not take advantage of that memory?).
Hope that helps.