I want to transfer a mysql dump, compressed, to s3.
I tried:
    mysqldump -u root -ppassword --all-databases | gzip -9 | s3cmd put s3://bucket/sql/databases.sql.gz
but then I get:
ERROR: Not enough paramters for command 'put'
How can I do this (in one line)?
This is possible with s3cmd 1.5+ (link):

    $ mysqldump ... | s3cmd put - s3://bucket/file-name.sql
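Applied to the command from the question (same credential and bucket placeholders), that would be:

    mysqldump -u root -ppassword --all-databases | gzip -9 | s3cmd put - s3://bucket/sql/databases.sql.gz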
This now appears to be possible. With s3cmd v1.6.1:
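The example command itself did not survive here; presumably it demonstrated the same stdin form, something like (the command and paths are placeholders, not the original answer's):

    some-command | gzip | s3cmd put - s3://bucket/path/file.gz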
You are missing the actual file you want to back up to start with. s3cmd takes two basic arguments: the file, and the bucket to back up to.

Secondly, I can't take credit for the following, but it's basically doing what you want with an intermediate script. Basically, create a bak.sh file with the following, and then that shell script will be runnable via bash. (Credit: http://www.wong101.com/tech-cloud/configure-s3cmd-cron-automated-mysql-backup)
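The script from the linked post is not preserved here; a minimal sketch of what such a bak.sh typically looks like (the timestamped filename and /tmp path are assumptions, not the linked author's exact script):

    #!/bin/bash
    # bak.sh - dump all databases, compress, upload to S3, then clean up.
    TIMESTAMP=$(date +%F)
    DUMPFILE="/tmp/databases-$TIMESTAMP.sql.gz"
    mysqldump -u root -ppassword --all-databases | gzip -9 > "$DUMPFILE"
    s3cmd put "$DUMPFILE" "s3://bucket/sql/databases-$TIMESTAMP.sql.gz"
    rm -f "$DUMPFILE"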
You could try using "-" to indicate to s3cmd that it should read stdin for the source parameter; it may work. Failing that, you could do it with an intermediate step: compress the output to tmp.file and, if successful (&&), put the file to S3, and then, if that too was successful, delete the temporary file.
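The one-liner itself is missing from the answer as preserved; matching that description (with the bucket path taken from the question), it would be something like:

    mysqldump -u root -ppassword --all-databases | gzip -9 > /tmp/tmp.file && s3cmd put /tmp/tmp.file s3://bucket/sql/databases.sql.gz && rm -f /tmp/tmp.file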
Create mysql.sh so the mysqldump output is stored directly to S3 via a pipeline; no local files will be stored.

Create a cronjob for the daily backup.
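Neither the script nor the cron line survived here; a minimal sketch of that approach (the script path, schedule, and timestamped key are assumptions):

    #!/bin/bash
    # mysql.sh - stream the dump straight to S3; nothing is written to local disk.
    mysqldump -u root -ppassword --all-databases | gzip -9 | s3cmd put - "s3://bucket/sql/databases-$(date +%F).sql.gz"

and a crontab entry to run it once a day, e.g. at 03:00:

    0 3 * * * /path/to/mysql.sh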