s3tools / s3cmd

Official s3cmd repo -- Command line tool for managing S3 compatible storage services (including Amazon S3 and CloudFront).
https://s3tools.org/s3cmd
GNU General Public License v2.0

ERROR could not be copied: 413 (Request Entity Too Large) with s3cmd 1.5.2 and (?)python 2.6.6(?) #510

Closed. JoarJ closed this issue 9 years ago.

JoarJ commented 9 years ago

I'm getting ERROR: File s3://joar4/bashrc could not be copied: 413 (Request Entity Too Large): when trying to sync between buckets.

It works fine on my workstation, but I get the error below when doing the same thing on CentOS / RHEL 6. My workstation (CentOS 7) has Python 2.7.5, while the servers have Python 2.6.6, so I suspect some Python 2.6.6 issue (?)

NOTE: I'm using riak-cs; someone with an Amazon account should see if they're able to reproduce.

joar@neptune:~$ s3cmd/s3cmd --version
s3cmd version 1.5.2
joar@neptune:~$ s3cmd/s3cmd ls
[...]
2015-03-26 11:56  s3://joar4
2015-03-26 11:57  s3://joar5
joar@neptune:~$ s3cmd/s3cmd ls s3://joar4
joar@neptune:~$ s3cmd/s3cmd ls s3://joar5
joar@neptune:~$ s3cmd/s3cmd put .bashrc s3://joar4/bashrc
WARNING: Module python-magic is not available. Guessing MIME types based on file extensions.
.bashrc -> s3://joar4/bashrc  [1 of 1]
 3716 of 3716   100% in 0s  49.37 kB/s  done
joar@neptune:~$ s3cmd/s3cmd sync s3://joar4/ s3://joar5/
Summary: 1 source files to copy, 0 files at destination to delete
ERROR: File s3://joar4/bashrc could not be copied: 413 (Request Entity Too Large):
Done. Copied 1 files in 1.0 seconds, 1.00 files/s
joar@neptune:~$ uname -a
Linux neptune.cosmicb.no 2.6.32-504.8.1.el6.x86_64 #1 SMP Wed Jan 28 21:11:36 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
joar@neptune:~$ cat /etc/redhat-release
CentOS release 6.6 (Final)
joar@neptune:~$ python -V
Python 2.6.6
joar@neptune:~$

mdomsch commented 9 years ago

The error response came from riak-cs. Perhaps it has a different size limit for single-object transfers? Given we'd switch to multipart at 15 MB by default, that would seem pretty odd. What does running with --debug return? (Be sure to strip out any access keys from the result.)
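For example, something along these lines should capture the full debug output while keeping credentials out of the paste (the sed patterns are only an illustrative redaction, adjust to taste):

s3cmd --debug sync s3://joar4/ s3://joar5/ 2>&1 \
  | sed -e "s/access_key->.*/access_key->[hidden]/" \
        -e "s/secret_key->.*/secret_key->[hidden]/" \
        -e "s/'AWS [^']*'/'AWS [hidden]'/" \
  > s3cmd-debug.log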

Thanks, Matt

JoarJ commented 9 years ago

joar@neptune:~$ s3cmd/s3cmd -d sync s3://joar4/ s3://joar5/
DEBUG: ConfigParser: Reading file '/home/joar/.s3cfg'
DEBUG: ConfigParser: access_key->[hidden]
DEBUG: ConfigParser: access_token->
DEBUG: ConfigParser: add_encoding_exts->
DEBUG: ConfigParser: add_headers->
DEBUG: ConfigParser: bucket_location->US
DEBUG: ConfigParser: cache_file->
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delay_updates->False
DEBUG: ConfigParser: delete_after->False
DEBUG: ConfigParser: delete_after_fetch->False
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: enable_multipart->True
DEBUG: ConfigParser: encoding->UTF-8
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: expiry_date->
DEBUG: ConfigParser: expiry_days->
DEBUG: ConfigParser: expiry_prefix->
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->s3.[hidden]
DEBUG: ConfigParser: host_bucket->%(bucket)s.s3.[hidden]
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: ignore_failed_copy->False
DEBUG: ConfigParser: invalidate_default_index_on_cf->False
DEBUG: ConfigParser: invalidate_default_index_root_on_cf->True
DEBUG: ConfigParser: invalidate_on_cf->False
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: max_delete->-1
DEBUG: ConfigParser: mime_type->
DEBUG: ConfigParser: multipart_chunk_size_mb->15
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->
DEBUG: ConfigParser: proxy_port->80
DEBUG: ConfigParser: put_continue->False
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->4096
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: restore_days->1
DEBUG: ConfigParser: secret_key->[hidden]
DEBUG: ConfigParser: send_chunk->4096
DEBUG: ConfigParser: server_side_encryption->False
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->300
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_https->False
DEBUG: ConfigParser: use_mime_magic->True
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: ConfigParser: website_endpoint->http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
DEBUG: ConfigParser: website_error->
DEBUG: ConfigParser: website_index->index.html
DEBUG: ConfigParser: signature_v2->True
DEBUG: Updating Config.Config cache_file ->
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'sync' using UTF-8
DEBUG: Unicodising 's3://joar4/' using UTF-8
DEBUG: Unicodising 's3://joar5/' using UTF-8
DEBUG: Command: sync
INFO: Retrieving list of remote files for s3://joar4/ ...
DEBUG: CreateRequest: resource[uri]=/
DEBUG: Using signature v2
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 26 Mar 2015 12:10:24 +0000\n/joar4/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(joar4): joar4.s3.[hidden]
DEBUG: ConnMan.get(): creating new connection: http://joar4.s3.[hidden]
DEBUG: non-proxied HTTPConnection(joar4.s3.[hidden])
DEBUG: format_uri(): /
DEBUG: Sending request method_string='GET', uri='/', headers={'Authorization': 'AWS [hidden]', 'x-amz-date': 'Thu, 26 Mar 2015 12:10:24 +0000'}, body=(0 bytes)
DEBUG: Response: {'status': 200, 'headers': {'date': 'Thu, 26 Mar 2015 12:10:24 GMT', 'content-length': '564', 'content-type': 'application/xml', 'server': 'Riak CS'}, 'reason': 'OK', 'data': '<?xml version="1.0" encoding="UTF-8"?>joar41000falsebashrc2015-03-26T11:57:57.000Z"7fb86cd811b258064734ca97d964e0ac"3716STANDARD007c64d0a70aa87c1074f442125b4bbb460d37616303d4302c3353ff42a0cd5fjoar'}
DEBUG: ConnMan.put(): connection put back to pool (http://joar4.s3.[hidden]#1)
DEBUG: Applying --exclude/--include
DEBUG: CHECK: bashrc
DEBUG: PASS: u'bashrc'
INFO: Retrieving list of remote files for s3://joar5/ ...
DEBUG: CreateRequest: resource[uri]=/
DEBUG: Using signature v2
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 26 Mar 2015 12:10:24 +0000\n/joar5/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(joar5): joar5.s3.[hidden]
DEBUG: ConnMan.get(): creating new connection: http://joar5.s3.[hidden]
DEBUG: non-proxied HTTPConnection(joar5.s3.[hidden])
DEBUG: format_uri(): /
DEBUG: Sending request method_string='GET', uri='/', headers={'Authorization': 'AWS [hidden]', 'x-amz-date': 'Thu, 26 Mar 2015 12:10:24 +0000'}, body=(0 bytes)
DEBUG: Response: {'status': 200, 'headers': {'date': 'Thu, 26 Mar 2015 12:10:25 GMT', 'content-length': '253', 'content-type': 'application/xml', 'server': 'Riak CS'}, 'reason': 'OK', 'data': '<?xml version="1.0" encoding="UTF-8"?>joar51000false'}
DEBUG: ConnMan.put(): connection put back to pool (http://joar5.s3.[hidden]#1)
DEBUG: Applying --exclude/--include
INFO: Found 1 source files, 0 destination files
INFO: Verifying attributes...
DEBUG: Comparing filelists (direction: remote -> remote)
DEBUG: CHECK: bashrc
Summary: 1 source files to copy, 0 files at destination to delete
DEBUG: String 'bashrc' encoded to 'bashrc'
DEBUG: String 'bashrc' encoded to 'bashrc'
DEBUG: CreateRequest: resource[uri]=/bashrc
DEBUG: Using signature v2
DEBUG: SignHeaders: 'PUT\n\n\n\nx-amz-copy-source:/joar4/bashrc\nx-amz-date:Thu, 26 Mar 2015 12:10:25 +0000\nx-amz-metadata-directive:COPY\nx-amz-storage-class:STANDARD\n/joar5/bashrc'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(joar5): joar5.s3.[hidden]
DEBUG: ConnMan.get(): re-using connection: http://joar5.s3.[hidden]#1
DEBUG: format_uri(): /bashrc
DEBUG: Sending request method_string='PUT', uri='/bashrc', headers={'Authorization': 'AWS [hidden]', 'x-amz-metadata-directive': 'COPY', 'x-amz-date': 'Thu, 26 Mar 2015 12:10:25 +0000', 'x-amz-copy-source': '/joar4/bashrc', 'x-amz-storage-class': 'STANDARD'}, body=(0 bytes)
DEBUG: Response: {'status': 413, 'headers': {'date': 'Thu, 26 Mar 2015 12:10:25 GMT', 'content-length': '0', 'server': 'Riak CS'}, 'reason': 'Request Entity Too Large', 'data': ''}
DEBUG: ConnMan.put(): connection put back to pool (http://joar5.s3.[hidden]#2)
DEBUG: S3Error: 413 (Request Entity Too Large)
DEBUG: HttpHeader: date: Thu, 26 Mar 2015 12:10:25 GMT
DEBUG: HttpHeader: content-length: 0
DEBUG: HttpHeader: server: Riak CS
ERROR: File s3://joar4/bashrc could not be copied: 413 (Request Entity Too Large):
DEBUG: Process files that was not remote copied
Done. Copied 1 files in 1.0 seconds, 1.00 files/s

JoarJ commented 9 years ago

The bucket I'm syncing in this test only contains my .bashrc, which is 3716 bytes... And note that bucket sync works fine on my local workstation with the same config, same riak-cs, same buckets and so on (except my workstation has Python 2.7.5).

JoarJ commented 9 years ago

If this doesn't happen at Amazon, it's probably riak-cs related (I don't have an Amazon account).

JoarJ commented 9 years ago

Hmm, I created an Amazon test account and had no problems there; I wasn't able to reproduce. Consider this problem riak-cs related.

I'll contact Basho and see what they have to say.

mdomsch commented 9 years ago

Closing pending resolution by basho.

binarytemple commented 9 years ago

I suspect the upload user may be triggering this bug: https://github.com/basho/riak_cs/issues/939. @mdomsch, does that sound like a reasonable possibility?

mdomsch commented 9 years ago

Yes, it would appear so. The s3cmd [copy] command does not send a Content-Length header.
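For reference, a rough illustration of what I believe is going on (this is not s3cmd's actual code, and the hostname below is a placeholder): the remote-copy PUT has an empty body, and Python 2.6's httplib only adds a Content-Length header automatically when the body is non-empty, whereas Python 2.7 (2.7.4 and later, I believe) sends Content-Length: 0 for an empty body. That would explain why the same command works from the Python 2.7.5 workstation but fails against riak-cs from the 2.6.6 servers (basho/riak_cs#939). Setting the header explicitly sidesteps the difference:

# Illustration only -- a real copy request also needs the AWS Authorization
# and x-amz-date headers; the endpoint is a placeholder.
import httplib

conn = httplib.HTTPConnection("joar5.s3.example.com")
headers = {
    "x-amz-copy-source": "/joar4/bashrc",
    "x-amz-metadata-directive": "COPY",
    "x-amz-storage-class": "STANDARD",
    # Sent explicitly: Python 2.6's httplib will not add it for an empty body.
    "Content-Length": "0",
}
conn.request("PUT", "/bashrc", body="", headers=headers)
print conn.getresponse().status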
