It used to be my favorite backup transport agent, but now I frequently get this result from s3cmd on the very same Ubuntu server/network:
root@server:/home/backups# s3cmd put bkup.tgz s3://mybucket/
bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
36864 of 2711541519 0% in 1s 20.95 kB/s failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.00)
WARNING: Waiting 3 sec...
bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
36864 of 2711541519 0% in 1s 23.96 kB/s failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.01)
WARNING: Waiting 6 sec...
bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
28672 of 2711541519 0% in 1s 18.71 kB/s failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.05)
WARNING: Waiting 9 sec...
bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
28672 of 2711541519 0% in 1s 18.86 kB/s failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.25)
WARNING: Waiting 12 sec...
bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
28672 of 2711541519 0% in 1s 15.79 kB/s failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=1.25)
WARNING: Waiting 15 sec...
bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1]
12288 of 2711541519 0% in 2s 4.78 kB/s failed
ERROR: Upload of 'bkup.tgz' failed too many times. Skipping that file.
This happens even for files as small as 100 MB, so I suppose it's not a size issue. It also happens when I use put with the --acl-private flag (s3cmd version 1.0.1).
I'd appreciate it if you could suggest a solution or a lightweight alternative to s3cmd.
This helped in my case:
1. run s3cmd ls on the bucket
2. it prints a warning that names a different bucket host
3. replace the host_bucket in the .s3cfg file with the one from the warning
4. run s3cmd ls again; it should no longer print a warning
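My .s3cfg now has entries along these lines (the eu-west-1 endpoint here is only an example; use whatever host the warning actually printed):
host_base = s3-eu-west-1.amazonaws.com
host_bucket = %(bucket)s.s3-eu-west-1.amazonaws.com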
There are a few common problems that result in s3cmd returning the error you mention:
Alternatives to s3cmd:
If you wish to write your own script, you can use the Python Boto library, which has functions for performing most AWS operations and has many examples available online. There is also a project that exposes some of the boto functions on the command line, although only a very small set of functions is currently available.
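As a rough sketch (classic boto, not boto3; the bucket and file names are taken from the question, and AWS credentials are assumed to be available via environment variables or the boto config):
# upload a local file to S3 with the classic boto library
import boto

conn = boto.connect_s3()                    # picks up credentials from env vars or ~/.boto
bucket = conn.get_bucket('mybucket')        # bucket name from the question
key = bucket.new_key('bkup.tgz')            # object name to create in the bucket
key.set_contents_from_filename('bkup.tgz')  # streams the local file to S3
For multi-gigabyte archives like the one in the question, boto also supports multipart uploads via bucket.initiate_multipart_upload().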
I had the same problem with the Ubuntu s3cmd command. Downloading the latest stable version (1.0.1) solved it: http://sourceforge.net/projects/s3tools/files/s3cmd/
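If you install from the downloaded tarball, the usual steps look roughly like this (the tarball name below is an assumption; adjust it to the file you actually download from the page above):
tar xzf s3cmd-1.0.1.tar.gz        # assumed tarball name
cd s3cmd-1.0.1
sudo python setup.py install      # standard Python setup.py install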
After trying all of the above, I noticed I was still hitting the throttling issue with s3cmd put, but not with s3cmd sync. Hope this is useful to somebody as a quick fix :)
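For example, with the file and bucket names from the question, that just means swapping the subcommand:
s3cmd sync bkup.tgz s3://mybucket/     # same transfer, via sync instead of put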
I had the same problem and found a solution here, in the response by samwise.
This problem appeared when I started experimenting with IAM. In my case the problem was in the ARN: I listed arn:aws:s3:::bucketname instead of arn:aws:s3:::bucketname/*.
That's why I had no problems with s3cmd ls s3://bucketname, but could not upload any file there.
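For illustration (the bucket name and the exact actions are placeholders), a policy that allows both listing and uploading needs both ARN forms:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::bucketname"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::bucketname/*"
    }
  ]
}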
I had every second part of a multi-part upload with s3cmd sync fail with this error. The next upload would work fine, but then one failed again, and so on.
I got it working with the --limit-rate= option set to 4m, so that uploads are throttled to at most 4 MB/s. The full setting is sketched below.
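(The local path and bucket below are placeholders, and --limit-rate may require a newer s3cmd release than the 1.0.1 mentioned in the question.)
s3cmd sync --limit-rate=4m /path/to/backups/ s3://mybucket/backups/   # cap upload speed at ~4 MB/s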
This is also commonly caused by the HTTPS setting in your .s3cfg file.
Try changing the configuration parameter from "use_https = False" to "use_https = True" in the .s3cfg.
Remember that Amazon buckets redirect to HTTPS, hence all the retries. I see this issue quite a bit in the field.
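That is, in the .s3cfg:
use_https = True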