s3tools / s3cmd

Official s3cmd repo -- Command-line tool for managing S3-compatible storage services (including Amazon S3 and CloudFront).
https://s3tools.org/s3cmd
GNU General Public License v2.0

"WARNING: Retrying failed request" when listing objects in a bucket #314

Closed: matteobar closed this issue 4 years ago

matteobar commented 10 years ago

When listing all the objects in an S3 bucket, the following WARNING is shown every time:

WARNING: Retrying failed request: /?marker=100PENTX/IMGP0125.JPG () WARNING: Waiting 3 sec...

In the end the listing does succeed, but this warning is given every time. It looks like it has something to do with a failed "next marker" request; the bucket contains more than 1000 files. After the initial failure, the retry always succeeds.
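
For context: S3 returns at most 1000 keys per listing response, so when a listing is truncated the client issues a follow-up request whose marker parameter is the last key already seen; that is exactly the /?marker=... request failing here. A minimal sketch of that pagination loop, with a hypothetical fetch_page() helper standing in for s3cmd's real signed-request machinery:

    def list_all_keys(bucket):
        """Page through a bucket 1000 keys at a time, the way the S3 list API works."""
        keys, marker = [], None
        while True:
            # fetch_page() is hypothetical: it issues GET /?marker=<last key> against
            # the bucket and returns the parsed page as a dict.
            page = fetch_page(bucket, marker=marker)
            keys.extend(page["keys"])
            if not page["is_truncated"]:
                return keys
            # This "next marker" follow-up request is the one that triggers the warning.
            marker = page["keys"][-1]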

mdomsch commented 10 years ago

Which version, please, and where is the bucket? Is this transient (i.e. it occurs only occasionally), or does it happen every time you list this bucket?

Thanks, Matt

matteobar commented 10 years ago

Latest s3cmd version, downloaded from the master GitHub branch just today. OS is Linux CentOS. Bucket is in the US (East, I believe). The command used is: s3cmd ls s3://bucketname -r (recursive). It happens every time the command is issued, so it should be reproducible. It always fails with the WARNING once, but then it's OK. I can't give info on the bucket for security reasons, but I will try to create a new bucket with publicly accessible content and see if I can recreate this issue in a new bucket. Thanks

mdomsch commented 10 years ago

OK, thanks. I haven't been able to reproduce this in my own buckets (US-east-1) which have several thousand objects each.

matteobar commented 10 years ago

Please try: s3cmd ls s3://s3cmdtestmarker or s3cmd du s3://s3cmdtestmarker

I consistently get:

WARNING: Retrying failed request: /?marker=New%20Bitmap%20Image%20-%20Copy%20(528)%20-%20Copy%20-%20Copy.bmp&delimiter=/ () WARNING: Waiting 3 sec...

and then it works fine after that. This always happens for me, every time I issue a listing command that fetches more than 1000 objects. I am running CentOS within the Windows Azure infrastructure, so all the requests stay within the USA. Strange.

kyleboon commented 10 years ago

This is also happening for me using 1.5.0-beta1

matteobar commented 10 years ago

Hi kyleboon, do you get the same s3cmd warning using the same bucket, s3://s3cmdtestmarker, or some other bucket? Thanks

rodrigolc commented 10 years ago

I have been getting this same error and finally had to switch to aws-cli. Funnily enough, it only happened when run via a cron job. It ended up filling our inboxes pretty quickly (at least about 6 times a day).

The weird thing was that it failed without a message for the exception: the parentheses should contain the exception message, but none is present.

Anyway, we're probably going to stick with aws-cli, but if you need any help, I'm here.

kyleboon commented 10 years ago

@matteobar Sorry, I could have been clearer. I'm seeing this warning when using s3cmd to sync my Octopress blog to S3.

Here's an example of the output:

WARNING: Retrying failed request: /?marker=logs/2013-07-24-00-34-11-B45A917F52429C42 ('')
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?marker=logs/2013-07-29-18-22-15-FF20AE4C1307B61A ('')
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?marker=logs/2013-08-03-21-20-50-BC25B564614B6053 ('')
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?marker=logs/2013-08-14-08-25-42-7F526DEB9A50EAED ('')
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?marker=logs/2013-08-20-00-27-45-0CBE760C8002A59B ('')
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?marker=logs/2013-08-26-01-34-03-1C40A40DA326DB01 ('')
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?marker=logs/2013-09-05-13-41-44-8AEB0CA004C0ED52 ('')
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?marker=logs/2013-09-10-21-28-57-CA7B2989ED7D9F8F ('')
WARNING: Waiting 3 sec...

Here are the commands I'm using (from a Rake task):

  ok_failed system("s3cmd sync -P public/* s3://#{s3_bucket}/ --mime-type='text/html; charset=utf-8' --add-header 'Content-Encoding: gzip' --exclude '*.*' --include '*.html'")
  # sync non gzipped, non js/css/image files
  ok_failed system("s3cmd sync --guess-mime-type -P public/* s3://#{s3_bucket}/ --exclude 'images/' --exclude '*.css' --exclude '*.js' --exclude '*.html'")
  # sync gzipped css and js
  ok_failed system("s3cmd sync --guess-mime-type -P public/* s3://#{s3_bucket}/ --add-header 'Content-Encoding: gzip' --add-header 'Cache-Control: public, max-age=31600000' --exclude '*.*' --include '*.js' --include '*.css'")
  # sync all images
  ok_failed system("s3cmd sync --guess-mime-type -P --add-header 'Cache-Control: public, max-age=31600000' public/images/* s3://#{s3_bucket}/images/")

kyleboon commented 10 years ago

I tried

s3cmd ls s3://s3cmdtestmarker 

as well and see the exceptions there too.

mdomsch commented 10 years ago

Interesting. I ran s3cmd ls s3://s3cmdtestmarker (on a Fedora 20 system) and do not see any "Retrying failed request" messages. Can you run with --debug and post the results somewhere? What kind of system are you on? Which Python version?

Separately, why are you using Content-Encoding: gzip? Not that it matters in this case, but that value is meant to be set by a web server that itself gzip-compresses a file for transmission, so the receiving end knows to gunzip it before writing it to disk. s3cmd no longer tries to gzip content before transmission, or to detect that a file was already gzipped on disk; that bug was fixed a few weeks ago.

Thanks, Matt

matteobar commented 10 years ago

For me, it happens both under CentOS 6.5 (Python 2.6.6) and Windows 7 (Python 2.7.6). Here is the output on CentOS with the --debug flag (I have truncated the irrelevant parts; see "...truncated..."):

DEBUG: ConfigParser: Reading file '/home/matteo/.s3cfg' DEBUG: ConfigParser: access_key->AK...17_chars...Q DEBUG: ConfigParser: access_token-> DEBUG: ConfigParser: add_encoding_exts-> DEBUG: ConfigParser: add_headers-> DEBUG: ConfigParser: bucket_location->US DEBUG: ConfigParser: cache_file-> DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com DEBUG: ConfigParser: default_mime_type->binary/octet-stream DEBUG: ConfigParser: delay_updates->False DEBUG: ConfigParser: delete_after->False DEBUG: ConfigParser: delete_after_fetch->False DEBUG: ConfigParser: delete_removed->False DEBUG: ConfigParser: dry_run->False DEBUG: ConfigParser: enable_multipart->True DEBUG: ConfigParser: encoding->UTF-8 DEBUG: ConfigParser: encrypt->False DEBUG: ConfigParser: expiry_date-> DEBUG: ConfigParser: expiry_days-> DEBUG: ConfigParser: expiry_prefix-> DEBUG: ConfigParser: follow_symlinks->False DEBUG: ConfigParser: force->False DEBUG: ConfigParser: get_continue->False DEBUG: ConfigParser: gpg_command->/usr/bin/gpg DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s DEBUG: ConfigParser: gpg_passphrase->12...3_chars...6 DEBUG: ConfigParser: guess_mime_type->True DEBUG: ConfigParser: host_base->s3.amazonaws.com DEBUG: ConfigParser: host_bucket->%(bucket)s.s3.amazonaws.com DEBUG: ConfigParser: human_readable_sizes->False DEBUG: ConfigParser: ignore_failed_copy->False DEBUG: ConfigParser: invalidate_default_index_on_cf->False DEBUG: ConfigParser: invalidate_default_index_root_on_cf->True DEBUG: ConfigParser: invalidate_on_cf->False DEBUG: ConfigParser: list_md5->False DEBUG: ConfigParser: log_target_prefix-> DEBUG: ConfigParser: max_delete->-1 DEBUG: ConfigParser: mime_type-> DEBUG: ConfigParser: multipart_chunk_size_mb->15 DEBUG: ConfigParser: preserve_attrs->True DEBUG: ConfigParser: progress_meter->True DEBUG: ConfigParser: proxy_host-> DEBUG: ConfigParser: proxy_port->0 DEBUG: ConfigParser: put_continue->False DEBUG: ConfigParser: recursive->False DEBUG: ConfigParser: recv_chunk->4096 DEBUG: ConfigParser: reduced_redundancy->False DEBUG: ConfigParser: restore_days->1 DEBUG: ConfigParser: secret_key->ac...37_chars...P DEBUG: ConfigParser: send_chunk->4096 DEBUG: ConfigParser: server_side_encryption->False DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com DEBUG: ConfigParser: skip_existing->False DEBUG: ConfigParser: socket_timeout->6300 DEBUG: ConfigParser: urlencoding_mode->normal DEBUG: ConfigParser: use_https->True DEBUG: ConfigParser: use_mime_magic->True DEBUG: ConfigParser: verbosity->WARNING DEBUG: ConfigParser: website_endpoint->http://%(bucket)s.s3-website-%(location)s.amazonaws.com/ DEBUG: ConfigParser: website_error-> DEBUG: ConfigParser: website_index->index.html DEBUG: Updating Config.Config cache_file -> DEBUG: Updating Config.Config encoding -> UTF-8 DEBUG: Updating Config.Config follow_symlinks -> False DEBUG: Updating Config.Config verbosity -> 10 DEBUG: Unicodising 'ls' using UTF-8 DEBUG: Unicodising 's3://s3cmdtestmarker' using UTF-8 DEBUG: Command: ls DEBUG: Bucket 's3://s3cmdtestmarker': DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Wed, 23 Apr 2014 00:25:41 +0000\n/s3cmdtestmarker/' DEBUG: CreateRequest: resource[uri]=/ DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Wed, 23 Apr 2014 00:25:41 +0000\n/s3cmdtestmarker/' DEBUG: 
Processing request, please wait... DEBUG: get_hostname(s3cmdtestmarker): s3cmdtestmarker.s3.amazonaws.com DEBUG: ConnMan.get(): creating new connection: https://s3cmdtestmarker.s3.amazonaws.com DEBUG: format_uri(): /?delimiter=/ DEBUG: Sending request method_string='GET', uri='/?delimiter=/', headers={'content-length': '0', 'Authorization': 'AWS AKIAJHTJDKCPCLKAQENQ:SmXUQcuukU48/RhVtmFPyiUb/go=', 'x-amz-date': 'Wed, 23 Apr 2014 00:25:41 +0000'}, body=(0 bytes) DEBUG: Response: {'status': 200, 'headers': {'x-amz-id-2': 'cOZD45+ZYH6DV1Rem5TukghJSGFAJXWuE4bxUh8ZBXgtj0+xboYPbZSfamky92Fi', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'x-amz-request-id': '84442972E9122035', 'date': 'Wed, 23 Apr 2014 00:25:33 GMT', 'content-type': 'application/xml'}, 'reason': 'OK', 'data': '<?xml version="1.0" encoding="UTF-8"?>\ns3cmdtestmarkerNew Bitmap Image - Copy (528) - Copy - Copy.bmp1000/trueNew Bitmap Image - Copy (10) - Copy - Copy.bmp2014-04-19T00:27:27.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (10) - Copy.bmp2014-04-19T00:27:29.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (10).bmp2014-04-19T00:27:30.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (100) - Copy - Copy.bmp2014-04-19T00:27:31.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (100) - Copy.bmp2014-04-19T00:27:42.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (100).bmp2014-04-19T00:27:43.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (101) - Copy - Copy.bmp2014-04-19T00:27:43.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (101) - Copy.bmp2014-04-19T00:27:43.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (101).bmp2014-04-19T00:27:43.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (102) - Copy - Copy.bmp2014-04-19T00:27:43.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (102) - Copy.bmp2014-04-19T00:27:43.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (103) - Copy - Copy.bmp2014-04-19T00:27:43.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (103) - Copy.bmp2014-04-19T00:27:43.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (104) - Copy - Copy.bmp2014-04-19T00:27:43.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (104) - Copy.bmp2014-04-19T00:27:44.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap 
Image - Copy (105) - Copy - Copy.bmp2014-04-19T00:27:44.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9c...truncated....'} DEBUG: ConnMan.put(): connection put back to pool (https://s3cmdtestmarker.s3.amazonaws.com#1) DEBUG: String 'New Bitmap Image - Copy (528) - Copy - Copy.bmp' encoded to 'New%20Bitmap%20Image%20-%20Copy%20(528)%20-%20Copy%20-%20Copy.bmp' DEBUG: Listing continues after 'New%20Bitmap%20Image%20-%20Copy%20(528)%20-%20Copy%20-%20Copy.bmp' DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Wed, 23 Apr 2014 00:25:43 +0000\n/s3cmdtestmarker/' DEBUG: CreateRequest: resource[uri]=/ DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Wed, 23 Apr 2014 00:25:43 +0000\n/s3cmdtestmarker/' DEBUG: Processing request, please wait... DEBUG: get_hostname(s3cmdtestmarker): s3cmdtestmarker.s3.amazonaws.com DEBUG: ConnMan.get(): re-using connection: https://s3cmdtestmarker.s3.amazonaws.com#1 DEBUG: format_uri(): /?marker=New%20Bitmap%20Image%20-%20Copy%20(528)%20-%20Copy%20-%20Copy.bmp&delimiter=/ DEBUG: Sending request method_string='GET', uri='/?marker=New%20Bitmap%20Image%20-%20Copy%20(528)%20-%20Copy%20-%20Copy.bmp&delimiter=/', headers={'content-length': '0', 'Authorization': 'AWS AKIAJHTJDKCPCLKAQENQ:OZPTNnd9G7e7/5xN8sS0cQJQW6M=', 'x-amz-date': 'Wed, 23 Apr 2014 00:25:43 +0000'}, body=(0 bytes) WARNING: Retrying failed request: /?marker=New%20Bitmap%20Image%20-%20Copy%20(528)%20-%20Copy%20-%20Copy.bmp&delimiter=/ () WARNING: Waiting 3 sec... DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Wed, 23 Apr 2014 00:25:46 +0000\n/s3cmdtestmarker/' DEBUG: Processing request, please wait... DEBUG: get_hostname(s3cmdtestmarker): s3cmdtestmarker.s3.amazonaws.com DEBUG: ConnMan.get(): creating new connection: https://s3cmdtestmarker.s3.amazonaws.com DEBUG: format_uri(): /?marker=New%20Bitmap%20Image%20-%20Copy%20(528)%20-%20Copy%20-%20Copy.bmp&delimiter=/ DEBUG: Sending request method_string='GET', uri='/?marker=New%20Bitmap%20Image%20-%20Copy%20(528)%20-%20Copy%20-%20Copy.bmp&delimiter=/', headers={'content-length': '0', 'Authorization': 'AWS AKIAJHTJDKCPCLKAQENQ:en5YJiaA8e7Id0z3+RE4M69hXT4=', 'x-amz-date': 'Wed, 23 Apr 2014 00:25:46 +0000'}, body=(0 bytes) DEBUG: Response: {'status': 200, 'headers': {'x-amz-id-2': 'jGHBJiSvvpLIkiQD0UCk/wUOvihQGJ/cd/O7FnhHKZmQK+PwCpKUIqsgYw/HTvbO', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'x-amz-request-id': '4B916B1158744EA4', 'date': 'Wed, 23 Apr 2014 00:25:38 GMT', 'content-type': 'application/xml'}, 'reason': 'OK', 'data': '<?xml version="1.0" encoding="UTF-8"?>\ns3cmdtestmarkerNew Bitmap Image - Copy (528) - Copy - Copy.bmp1000/falseNew Bitmap Image - Copy (528) - Copy.bmp2014-04-19T00:30:01.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (529) - Copy - Copy.bmp2014-04-19T00:30:02.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (529) - Copy.bmp2014-04-19T00:30:02.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (53) - Copy - Copy.bmp2014-04-19T00:30:02.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (53) - Copy.bmp2014-04-19T00:30:02.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy 
(53).bmp2014-04-19T00:30:02.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (530) - Copy - Copy.bmp2014-04-19T00:30:02.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (530) - Copy.bmp2014-04-19T00:30:02.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (531) - Copy - Copy.bmp2014-04-19T00:30:02.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (531) - Copy.bmp2014-04-19T00:30:03.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (532) - Copy - Copy.bmp2014-04-19T00:30:03.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (532) - Copy.bmp2014-04-19T00:30:03.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (533) - Copy - Copy.bmp2014-04-19T00:30:03.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (533) - Copy.bmp2014-04-19T00:30:03.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (534) - Copy - Copy.bmp2014-04-19T00:30:03.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (534) - Copy.bmp2014-04-19T00:30:03.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (535) - Copy - Copy.bmp2014-04-19T00:30:03.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (535) - Copy.bmp2014-04-19T00:30:03.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (536) - Copy - Copy.bmp2014-04-19T00:30:04.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (536) - Copy.bmp2014-04-19T00:30:04.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (537) - Copy - Copy.bmp2014-04-19T00:30:04.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (537) - Copy.bmp2014-04-19T00:30:04.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (538) - Copy - Copy.bmp2014-04-19T00:30:04.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (538) - Copy.bmp2014-04-19T00:30:04.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (539) - Copy - Copy.bmp2014-04-19T00:30:05.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image 
- Copy (539) - Copy.bmp2014-04-19T00:30:05.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (54) - Copy - Copy.bmp2014-04-19T00:30:05.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (54) - Copy.bmp2014-04-19T00:30:05.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (54).bmp2014-04-19T00:30:05.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (540) - Copy - Copy.bmp2014-04-19T00:30:05.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (540) - Copy.bmp2014-04-19T00:30:05.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (541) - Copy - Copy.bmp2014-04-19T00:30:05.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTANDARDNew Bitmap Image - Copy (541) - Copy.bmp2014-04-19T00:30:06.000Z"d41d8cd98f00b204e9800998ecf8427e"0f30e7668e18b4cb1a866765fe9700a0d779f02c80f9cf06304afaffecad04d75infoSTAN...truncated...ANDARD'} DEBUG: ConnMan.put(): connection put back to pool (https://s3cmdtestmarker.s3.amazonaws.com#1) 2014-04-19 00:27 0 s3://s3cmdtestmarker/New Bitmap Image - Copy (10) - Copy - Copy.bmp 2014-04-19 00:27 0 s3://s3cmdtestmarker/New Bitmap Image - Copy (10) - Copy.bmp 2014-04-19 00:27 0 s3://s3cmdtestmarker/New Bitmap Image - Copy (10).bmp 2014-04-19 00:27 0 s3://s3cmdtestmarker/New Bitmap Image - Copy (100) - Copy - Copy.bmp 2014-04-19 00:27 0 s3://s3cmdtestmarker/New Bitmap Image - Copy (100) - Copy.bmp ... truncated .... 2014-04-19 00:31 0 s3://s3cmdtestmarker/New Bitmap Image - Copy - Copy.bmp 2014-04-19 00:31 0 s3://s3cmdtestmarker/New Bitmap Image - Copy.bmp 2014-04-19 00:31 0 s3://s3cmdtestmarker/New Bitmap Image.bmp

kyleboon commented 10 years ago

I'm on OS X 10.9.2 running Python 2.7.5.

Oddly enough, tonight it's not giving me those errors for the s3cmdtestmarker bucket.

I am copying already-gzipped files to Amazon S3 because I'm using it to host a static web site. S3 won't do the compression itself, but it will set the Content-Encoding header so the browser knows to decompress the files.
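
For readers following this static-site workflow: the gzipping has to happen locally before the sync. A small illustrative snippet (standard library only; the file paths are examples) that compresses the generated HTML in place, after which the s3cmd sync ... --add-header 'Content-Encoding: gzip' commands shown earlier upload the already-compressed bytes:

    import glob
    import gzip
    import shutil

    # Compress each generated HTML file in place. S3 stores the bytes as-is;
    # the Content-Encoding: gzip header added at upload time tells browsers
    # to decompress them on arrival.
    for path in glob.glob("public/*.html"):
        with open(path, "rb") as fin:
            data = fin.read()
        with gzip.open(path + ".tmp", "wb") as fout:
            fout.write(data)
        shutil.move(path + ".tmp", path)  # keep the original filename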

matteobar commented 10 years ago

@mdomsch I can try debugging this, if you can point me to the approximate location / module where this failure is likely happening ...

mdomsch commented 10 years ago

Yes please.

s3cmd, line 146: subcmd_bucket_list() calls
S3/S3.py, line 241: bucket_list(), which calls
S3/S3.py, line 276: bucket_list_noparse(), which calls send_request().
S3/S3.py, line 847: inside send_request(), the warning is printed after catching an Exception that we aren't specifically handling.
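
For readers who don't want to open the source, the behaviour boils down to a generic retry wrapper along these lines (a simplified illustration, not the actual s3cmd code; do_request is a placeholder for the real HTTP call):

    import time

    def send_with_retries(do_request, resource_uri, retries=5):
        """Simplified illustration of the retry behaviour described above."""
        try:
            return do_request()
        except Exception as e:
            # Any exception not handled explicitly ends up here. Note that str(e)
            # can be empty, which is why the warning sometimes shows bare "()".
            if retries > 0:
                print("WARNING: Retrying failed request: %s (%s)" % (resource_uri, e))
                print("WARNING: Waiting 3 sec...")
                time.sleep(3)
                return send_with_retries(do_request, resource_uri, retries - 1)
            raise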

mdomsch commented 10 years ago

I thought this might be the socket_timeout being set too low in ~/.s3cfg, but @matteobar's debug output shows his socket timeout is set very large. Could others reporting a problem here check that their socket_timeout is at least 300 and, if not, see if raising it resolves the problem? Thanks.

msravi commented 10 years ago

I'm on OS X 10.6.8 running s3cmd version 1.5.0-beta1 and Python 2.7.4, and have the exact same issue. I have socket_timeout set to 300, and increasing it to 500 did not help.

Thanks, Ravi

lisuml commented 10 years ago

Any update on this?

chrishein commented 10 years ago

I'm having the same issue: Ubuntu 13.10, s3cmd version 1.5.0-beta1, socket_timeout = 300.

brafales commented 10 years ago

The same thing is happening to us on 64-bit CentOS with s3cmd version 1.5.0-rc1. Buckets are in Ireland.

FyrbyAdditive commented 10 years ago

I have this issue with the same config as brafales above on an Ireland bucket.

shihpeng commented 10 years ago

+1, same issue. WARNING: Retrying failed request: /?marker= occurs while I am trying to ls a folder containing about 100k files.

vamitrou commented 10 years ago

I cannot reproduce this on my own buckets, and s3://s3cmdtestmarker/ doesn't seem to be accessible anymore. Could you give access to one of your buckets?

vamitrou commented 10 years ago

Also, out of curiosity, what is the default locale on your systems?

vamitrou commented 10 years ago

Please put this inside the except: block (S3/S3.py::867):

import traceback; traceback.print_exc()
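
For anyone unsure what that buys: the retry warning only prints str(e), which can be empty, while traceback.print_exc() shows the exception type and where it was raised. A tiny self-contained demonstration of the difference:

    import traceback

    try:
        raise Exception()  # an exception whose str() is empty, like the ones reported here
    except Exception as e:
        # What the warning currently shows: empty parentheses.
        print("WARNING: Retrying failed request: /?marker=... (%s)" % e)
        # What the suggested line adds: the exception type and the offending stack frame.
        traceback.print_exc()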

chrishein commented 10 years ago

Well, I can't reproduce it now, working in the same environment and against the same bucket. Could this be related to something like https://forums.aws.amazon.com/message.jspa?messageID=327304 ?

S3cmd output looks consistent with going through https://github.com/s3tools/s3cmd/blob/master/S3/S3.py#L882

Thanks @vamitrou for taking a look.

ironictoo commented 9 years ago

Any updates on this? I am having the same issue. It started when I switched from version 1.0.0 to 1.5.0-rc1, but that could be a coincidence. Red Hat 5, 64-bit, Python 2.4.3.

dannyman commented 9 years ago

Seeing it here too on 1.5.0-rc1

If I remove the remote file then I get the error on a different file.

jamieburchell commented 9 years ago

Just wanted to say that I've always encountered this warning without any message in the parentheses. I also always get the broken pipe message. It happens every time with every version of s3cmd I've tried over the years, to the point that I actually thought this was normal until I read this thread. The process always succeeds, but the log file ends up huge. My bucket is huge, and the platform is a Linux/ARM TS-212 NAS.

allella commented 9 years ago

FWIW, there are a few potential fixes/theories posted at http://stackoverflow.com/questions/5774808/s3cmd-failed-too-many-times

jamieburchell commented 9 years ago

Thanks, but in my case it's files of any size; they always need to be "retried" the first time an upload starts, and they always get uploaded in the end (no failure).

hrchu commented 9 years ago

+1. Since multipart upload works, this is not a big problem for me.

jamieburchell commented 9 years ago

I've posted a sample log file showing the retries, the broken pipe, and the eventual successful upload. This seems to happen regardless of the file being uploaded; it always uploads the second time around.

http://pastebin.com/raw.php?i=j3nQgNaR

tharple commented 9 years ago

Since installing the latest version today (to use the stdin functionality), I compared s3cmd --version: s3cmd version 1.1.0-beta3 against s3cmd version 1.5.0. ls without a bucket name: both pass. ls with a bucket name: 1.1.0-beta3 never fails, 1.5.0 always fails. Observed debug difference (format_uri):

1.1.0-beta3:

DEBUG: format_uri(): /?delimiter=/ DEBUG: Sending request method_string='GET', uri='/?delimiter=/', headers={'content-length': '0', 'Authorization': 'AWS 7d798fd9af074 834bca0db006f152c85:1QbELjS7SLmBonw0g3ahbptpXPA=', 'x-amz-date': 'Thu, 22 Jan 2015 20:05:39 +0000'}, body=(0 bytes) DEBUG: Response: {'status': 200, 'headers': {'content-length': '307', 'expires': 'Thu, 01 Jan 1970 00:00:00 GMT', 'server': 'Verizon -Himalaya/3.5.1-9e1b68cffea7', 'pragma': 'no-cache', 'cache-control': 'no-cache', 'date': 'Thu, 22 Jan 2015 20:05:37 GMT', 'content- type': 'application/xml;charset=UTF-8'}, 'reason': 'OK', 'data': '<?xml version="1.0" encoding="UTF-8"?>tharple-11000/</Delimiter

falsescalar/'}

1.5.0:

DEBUG: format_uri(): /tharple-1/?delimiter=/ DEBUG: Sending request method_string='GET', uri='/tharple-1/?delimiter=/', headers={'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'x-amz-date': '20150122T195452Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=7d798fd9af074834bca0db006f152c85/20150122/US/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=c4424a556b5a77a84479aeecd0ef34fdc247c8a7301a79802a7cdc53f73426e1'}, body=(0 bytes) DEBUG: Response: {'status': 500, 'headers': {'content-length': '194', 'dav': '1,3', 'server': 'Verizon-Himalaya/3.5.1-9e1b68cffea7', 'connection': 'close', 'date': 'Thu, 22 Jan 2015 19:55:37 GMT', 'content-type': 'application/xml'}, 'reason': 'Internal Server Error', 'data': '<?xml version="1.0" encoding="UTF-8"?>\nInternalErrorWe encountered an internal error. Please try again.'}

Hope this helps Terry

allella commented 9 years ago

Thanks Terry. Is that your AWS key and secret in the debugging? If so, you should probably invalidate that key so some evildoer doesn't find this and access your bucket.

mdomsch commented 9 years ago

Verizon Cloud Storage (aka Himalaya) is throwing an HTTP 500 error. For now, maybe try with --signature-v2 because it clearly doesn't like something about the v4 signed requests. This is also the first instance I've heard of someone using s3cmd against Verizon Cloud Storage - glad to know it has worked in the past.

Sounds like we need to build a test matrix that includes all of the S3 clones (Swift, Walrus, DreamObjects, now Verizon Cloud Storage, fakes3; I think I've used it with Google Cloud too...). I definitely don't test against all of them myself.

tharple commented 9 years ago

Thanks allella. Already removed.

mdomsch commented 9 years ago

Note: DNS resolution for host-style addressing of buckets does not work on buckets with names that contain a '.' (period). The same is true for SSL certificate checking.

http://cloud.verizon.com/documentation/Buckets.htm

This means that we can't use s3cmd 1.5.0 against Verizon Cloud if your bucket has a '.' in its name. They are not handling wildcard NS or subdomain A records in their DNS, so these hostnames won't resolve. s3cmd uses https://<bucket>.<host>/path/to/object in its URLs, which won't resolve; it no longer uses https://<host>/<bucket>/path/to/object.
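
To make the two addressing styles concrete (illustrative only; host names and keys are examples):

    def virtual_host_url(bucket, key, host="s3.amazonaws.com"):
        # Bucket-as-subdomain ("host-style") addressing, which current s3cmd uses.
        # A bucket named "my.bucket" becomes the hostname "my.bucket.s3.amazonaws.com",
        # which needs wildcard DNS (and a matching wildcard TLS certificate) to work.
        return "https://%s.%s/%s" % (bucket, host, key)

    def path_style_url(bucket, key, host="s3.amazonaws.com"):
        # Path-style addressing: the bucket lives in the path, so dots in the
        # bucket name cause no DNS or certificate trouble.
        return "https://%s/%s/%s" % (host, bucket, key)

    print(virtual_host_url("my.bucket", "path/to/object"))
    print(path_style_url("my.bucket", "path/to/object"))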

jackal242 commented 9 years ago

Any updates on this bug? I'm having the same problem.

I'm trying to run "s3cmd ls --recursive s3://mybucket/90210/" and I'm getting a gazillion of these:

WARNING: Retrying failed request: /?marker=90210/lalala/FILESTUFF1 ()
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?marker=90210/lalala/FILESTUFF2 ()
WARNING: Waiting 3 sec...

There are over 1000 items in this list.

foresto commented 9 years ago

To anyone still struggling with this bug: I ran into it with s3cmd version 1.5.0~rc1 (which is the version in the Ubuntu 15.04 repositories), but it went away when I updated to version 1.5.2.

jamieburchell commented 9 years ago

I was all excited to see that, until I realised I'm already using 1.5.2 and still have a log full of them ;(

fviard commented 9 years ago

@jamieburchell: If you are able to reproduce the issue consistently, could you run 1.5.2 with the --debug command line option and post the log? (Take care to remove any sensitive personal data.) Thanks

fviard commented 9 years ago

@jamieburchell: Hmm, could you produce a similar log, but with the latest upstream version of s3cmd? In 1.5.2 there should already be the fixes described by @mdomsch. Thanks

jamieburchell commented 9 years ago

Please ignore my previous link to that debug log - it was for a different issue. Take a look here (search for "retry" in the text):

http://jamieburchell.com/backup2s3.zip

Cheers

jamieburchell commented 9 years ago

That is a debug log from the 1.5.2 tag.

fviard commented 9 years ago

@jamieburchell Thanks for the log; that is really interesting. The first thing I noticed that could (or could not) be related to the issue is that your files are stored with the storage class "GLACIER". That could have an impact.

If you can try a few things, I think we can narrow down the issue. (It would be better if you could use the latest master of s3cmd from GitHub, but 1.5.2 is still OK.) I would like you to run your test again in debug mode, but please tweak the following in your s3cmd code:

In S3/S3.py, in the following function:

    def send_request(self, request, retries = _max_retries):

you will see the following line:

    warning("Retrying failed request: %s (%s)" % (resource['uri'], e))

It would be great if you could add the following code just after it:

    from logging import exception as log_exc
    log_exc("WHY do I want to retry:")

jamieburchell commented 9 years ago

I'll have a look in a bit. I have lifecycle rules set up to archive music to Glacier. This also happens on files that are not archived, though.

jamieburchell commented 9 years ago

This is an extract from the master version:

ERROR: WHY do I want to retry:
Traceback (most recent call last):
  File "build/bdist.linux-armv5tel/egg/S3/S3.py", line 1014, in send_request
    http_response = conn.c.getresponse()
  File "/share/MD0_DATA/.qpkg/Python/lib/python2.7/httplib.py", line 1025, in getresponse
    response.begin()
  File "/share/MD0_DATA/.qpkg/Python/lib/python2.7/httplib.py", line 401, in begin
    version, status, reason = self._read_status()
  File "/share/MD0_DATA/.qpkg/Python/lib/python2.7/httplib.py", line 365, in _read_status
    raise BadStatusLine(line)
BadStatusLine: ''

jamieburchell commented 9 years ago

@fviard Any ideas?

fviard commented 9 years ago

@jamieburchell Sorry for the delayed reply. It is clearer now, but I'm not sure we can do anything about this: https://github.com/boto/boto/issues/1934 It looks like, when reusing a keep-alive connection, Amazon somehow sends a bad HTTP reply, for example with extra spaces or the like, and httplib is unable to handle that and so fails.

But it is strange that we almost never encounter it, whereas you face it often. So maybe you can try to update httplib or the Python on your device, because something in the library has since been improved. Otherwise, maybe we could retry with a shorter delay than 3 s in such a case, as a new connection will almost certainly fix the transfer for the request that hit this issue.
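
The shorter-retry idea could look roughly like this (a sketch only, assuming a hypothetical connection pool with get()/drop() and a do_request() callable; this is not the s3cmd implementation):

    try:
        from httplib import BadStatusLine       # Python 2, as used in this thread
    except ImportError:
        from http.client import BadStatusLine   # Python 3 name

    def request_with_stale_connection_retry(pool, do_request):
        conn = pool.get()  # may hand back a kept-alive connection the server already closed
        try:
            return do_request(conn)
        except BadStatusLine:
            # The pooled connection went stale (empty status line). Drop it and retry
            # immediately on a fresh connection instead of sleeping for 3 seconds.
            pool.drop(conn)
            return do_request(pool.get(force_new=True))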

jamieburchell commented 9 years ago

Hey

I'm not sure I can update it; it's the version of Python that's available for the NAS, and I think I looked into that before.

Cheers Jamie