The mysql service is not being pushed to the background after startup.
I start the service with sudo systemctl start mysql.service and it starts up: I can connect and send queries. But the command stays on my console and doesn't go into the background. I have it running on two servers; both run Ubuntu 20 and the same version of Percona XtraDB Cluster, 8.0.26.
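For reference, this is roughly what I'm doing and how I've been checking it on both servers:

    sudo systemctl start mysql.service    # blocks; the prompt never comes back
    systemctl status mysql.service        # from another terminal: the service is up and accepting connections
    systemctl cat mysql.service           # to inspect the unit file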
I'm building an HA cluster on CentOS 6.
I have n database nodes (using Percona XtraDB Cluster), and when the backup process starts I set "wsrep_desync" to ON, so that locks caused by the backup tool don't affect the other nodes. This works, but Galera Load Balancer is still sending queries to that server.
Is there any way to change the Galera Load Balancer configuration on the application servers so that it ignores the node while wsrep_desync is ON?
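For reference, this is roughly how I toggle it on the node being backed up:

    SET GLOBAL wsrep_desync = ON;
    -- ... backup runs here ...
    SET GLOBAL wsrep_desync = OFF;

On the application servers, the closest thing I've found is the glbd control socket; if I read the glb README correctly, and assuming glbd was started with --control 127.0.0.1:4444, something like this should drain the node (weight 0) or remove it (weight -1), but I haven't confirmed it (the node address and control port below are placeholders):

    echo "10.0.0.3:3306:0" | nc 127.0.0.1 4444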
Thank you!
I have a three node Percona XtraDB Cluster (5.5) setup.
Every night, we shut down MySQL on one randomly selected node in order to take backups of the data directory.
When our traffic is reasonably busy, this causes a few (2-4) error alerts along the lines of SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '140577' for key 'PRIMARY'. A primary key conflict, obviously, except we're using auto_increment columns as the primary key on these tables. Since each node has an offset assigned by the cluster engine, this shouldn't be occurring.
My suspicion is that taking the node out of the cluster causes the other two nodes to change their auto_increment offsets in a way that lets them conflict while the change takes place. I'm at a loss as to why this wouldn't be an atomic action, and as to how I might fix it.
Has anyone encountered this? Is there a way to temporarily freeze the auto increment settings in the cluster so they don't shuffle around during the backup process, or is there some other solution I'm not thinking of?
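The only knob I've found so far is wsrep_auto_increment_control; if I understand it correctly, turning it off and pinning the values myself would look something like this on each node (the increment/offset values below are just illustrative for three nodes):

    [mysqld]
    wsrep_auto_increment_control = OFF
    auto_increment_increment     = 3
    auto_increment_offset        = 1    # 2 and 3 on the other nodes

But I'm not sure whether that's safe, or whether it just trades one problem for another.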
I have been reviewing XtraDB clustering and produced a proof-of-concept environment on OpenStack using 4 instances, which has fallen over during my resilience testing.
Per the PXC documentation (http://www.percona.com/doc/percona-xtradb-cluster/howtos/virt_sandbox.html), which covers a 3-node install, I opted for a 4th node.
- Initial setup completed and data loading tests passed, with all nodes being updated synchronously while a 1.6GB test SQL file was used to load a database.
- Failure and restore of nodes commenced. This test entailed stopping the mysql service on a node, creating and subsequently dropping a database to test replication on the surviving nodes, and starting the downed node to resync.
- This worked fine for nodes 4, 3 and 2.
- Node1, which per the PXC documents is essentially a controller, would not rejoin the cluster.
So my questions are as follows:
- How do I return the controller node to service if the surviving nodes have since had data written to them? (My current guess is sketched after this list.)
- Using 4 nodes as a reference, is there a way to remove this single point of failure in node1? (If a surviving node restarts while the controller (node1) is down/out of sync, that node will also fail.)
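My current guess on the first question, based on the howto configuring node1 to bootstrap with an empty wsrep_cluster_address, is that rejoining it means temporarily pointing it at the survivors instead (the IPs below are placeholders for my instances) and restarting mysql so it takes an SST from a surviving donor, but I'd like confirmation that this is the intended procedure:

    # /etc/my.cnf on node1 when rejoining, instead of the empty gcomm:// used for bootstrapping
    [mysqld]
    wsrep_cluster_address = gcomm://192.168.70.3,192.168.70.4,192.168.70.5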
I'm looking for some instructions/a manual for setting up Percona XtraDB Cluster with XtraBackup for SST. Is there a configuration file where I need to provide login details for the XtraBackup script?
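From what I can piece together, it would be something along these lines in my.cnf (the SST user and password below are placeholders, and the exact wsrep_sst_method name may depend on the version):

    [mysqld]
    wsrep_sst_method = xtrabackup-v2
    # credentials the SST script uses to connect to the donor's mysqld
    wsrep_sst_auth   = sstuser:sstpassword

I believe a matching MySQL user also needs to exist on every node with the right privileges, but I couldn't find a definitive reference, hence the question.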
Thanks