I have a 1 TB MySQL database that I'd like to dump and reload. Most of the data is in one table. A lot of data has been deleted, so I'm pretty sure that if I dump it with mysqldump, rebuild the database, and reload it, the total size will be smaller.
I'm dumping the data with this command:
mysqldump -uroot -pXXX mydb | gzip -c > data.sql.gz
I get this error:
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `MY_TABLE` at row: 596089342
I've tried many variations, including increasing the packet size, using a single transaction, and going over TCP/IP instead of the local socket.
mysqldump -uroot -pXXX -h 127.0.0.1 --max-allowed-packet=1024M --single-transaction mydb | gzip -c > data.sql.gz
Finally, I even redirected the output to /dev/null to make sure the problem wasn't gzip. All permutations produce the same error.
mysqldump -uroot -pXXX -h 127.0.0.1 mydb > /dev/null
Here are some of the settings in my.cnf:
max_allowed_packet = 1G
interactive_timeout = 600
wait_timeout = 600
net_read_timeout=600
net_write_timeout=600
connect_timeout=600
One other odd thing is that the dump always stops at the same place: approximately 6 GB of gzipped data, at approximately the same record. When I run ls -l, the file size is always identical.
I'm stumped. Any suggestions for next steps?
For the record, this is MySQL 5.1.58 running on Ubuntu 11.10.
In the end, it looks like I had data corruption. I had copied two volumes (EC2 EBS) that were linked with LVM, and I may not have frozen the volumes properly when making the copy, so I suspect they were not properly synchronized. I started from my original volumes, ran the procedure again, carefully freezing the XFS volume before taking the EC2 snapshot, then loaded the copies onto my new server, and it worked fine.
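For reference, the freeze-then-snapshot sequence was roughly this (/data is a placeholder mount point for the XFS volume; adjust it to your setup):

sudo xfs_freeze -f /data   # flush and freeze the filesystem so the on-disk state is consistent
# take the EBS snapshot of the underlying volume(s) here, via the AWS console or CLI
sudo xfs_freeze -u /data   # unfreeze as soon as the snapshot has been initiated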
Have you tried using the socket file to bypass the TCP/IP layer? For example:
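Something along these lines, assuming the default Ubuntu socket path (check the socket setting in your my.cnf and adjust --socket accordingly):

mysqldump -uroot -pXXX --socket=/var/run/mysqld/mysqld.sock mydb | gzip -c > data.sql.gz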
I got the same error. The server had no more RAM available and no swap; mysqldump was eating all the RAM, so I decreased the memory-related parameters in my.ini and it worked.
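For example, settings like these could be lowered (illustrative values only; the right parameters and sizes depend on your workload and available memory):

key_buffer_size = 64M
sort_buffer_size = 1M
read_buffer_size = 1M
innodb_buffer_pool_size = 512M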