The error response came from Riak CS. Perhaps it has a different size limit for single-object transfers? Given that we switch to multipart at 15MB by default, that would seem pretty odd. What does running with --debug return? (Be sure to strip any access keys from the result.)
Thanks, Matt
On Thu, Mar 26, 2015 at 7:03 AM, Joar Jegleim notifications@github.com wrote:
I'm getting ERROR: File s3://joar4/bashrc could not be copied: 413 (Request Entity Too Large): when trying to sync between buckets.
It works fine on my workstation, but I get the below error when doing the same thing on CentOS / RHEL 6. My workstation (CentOS 7) has Python 2.7.5, while the servers have Python 2.6.6, so I suspect a Python 2.6.6 issue (?)
NOTE: I'm using Riak CS; someone with an Amazon account should see if they're able to reproduce.
joar@neptune:~$ s3cmd/s3cmd --version
s3cmd version 1.5.2
joar@neptune:~$ s3cmd/s3cmd ls
[...]
2015-03-26 11:56 s3://joar4
2015-03-26 11:57 s3://joar5
joar@neptune:~$ s3cmd/s3cmd ls s3://joar4
joar@neptune:~$ s3cmd/s3cmd ls s3://joar5
joar@neptune:~$ s3cmd/s3cmd put .bashrc s3://joar4/bashrc
WARNING: Module python-magic is not available. Guessing MIME types based on file extensions.
.bashrc -> s3://joar4/bashrc [1 of 1]
 3716 of 3716 100% in 0s 49.37 kB/s done
joar@neptune:~$ s3cmd/s3cmd sync s3://joar4/ s3://joar5/
Summary: 1 source files to copy, 0 files at destination to delete
ERROR: File s3://joar4/bashrc could not be copied: 413 (Request Entity Too Large):
Done. Copied 1 files in 1.0 seconds, 1.00 files/s
joar@neptune:~$ uname -a
Linux neptune.cosmicb.no 2.6.32-504.8.1.el6.x86_64 #1 SMP Wed Jan 28 21:11:36 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
joar@neptune:~$ cat /etc/redhat-release
CentOS release 6.6 (Final)
joar@neptune:~$ python -V
Python 2.6.6
joar@neptune:~$
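For context, the 15MB multipart cutoff Matt mentions comes from stock .s3cfg settings; the relevant defaults (also visible in the debug dump below) are:

enable_multipart = True
multipart_chunk_size_mb = 15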
joar@neptune:~$ s3cmd/s3cmd -d sync s3://joar4/ s3://joar5/
DEBUG: ConfigParser: Reading file '/home/joar/.s3cfg'
DEBUG: ConfigParser: access_key->[hidden]
DEBUG: ConfigParser: access_token->
DEBUG: ConfigParser: add_encoding_exts->
DEBUG: ConfigParser: add_headers->
DEBUG: ConfigParser: bucket_location->US
DEBUG: ConfigParser: cache_file->
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delay_updates->False
DEBUG: ConfigParser: delete_after->False
DEBUG: ConfigParser: delete_after_fetch->False
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: enable_multipart->True
DEBUG: ConfigParser: encoding->UTF-8
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: expiry_date->
DEBUG: ConfigParser: expiry_days->
DEBUG: ConfigParser: expiry_prefix->
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->s3.[hidden]
DEBUG: ConfigParser: host_bucket->%(bucket)s.s3.[hidden]
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: ignore_failed_copy->False
DEBUG: ConfigParser: invalidate_default_index_on_cf->False
DEBUG: ConfigParser: invalidate_default_index_root_on_cf->True
DEBUG: ConfigParser: invalidate_on_cf->False
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: max_delete->-1
DEBUG: ConfigParser: mime_type->
DEBUG: ConfigParser: multipart_chunk_size_mb->15
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->
DEBUG: ConfigParser: proxy_port->80
DEBUG: ConfigParser: put_continue->False
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->4096
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: restore_days->1
DEBUG: ConfigParser: secret_key->[hidden]
DEBUG: ConfigParser: send_chunk->4096
DEBUG: ConfigParser: server_side_encryption->False
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->300
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_https->False
DEBUG: ConfigParser: use_mime_magic->True
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: ConfigParser: website_endpoint->http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
DEBUG: ConfigParser: website_error->
DEBUG: ConfigParser: website_index->index.html
DEBUG: ConfigParser: signature_v2->True
DEBUG: Updating Config.Config cache_file ->
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'sync' using UTF-8
DEBUG: Unicodising 's3://joar4/' using UTF-8
DEBUG: Unicodising 's3://joar5/' using UTF-8
DEBUG: Command: sync
INFO: Retrieving list of remote files for s3://joar4/ ...
DEBUG: CreateRequest: resource[uri]=/
DEBUG: Using signature v2
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 26 Mar 2015 12:10:24 +0000\n/joar4/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(joar4): joar4.s3.[hidden]
DEBUG: ConnMan.get(): creating new connection: http://joar4.s3.[hidden]
DEBUG: non-proxied HTTPConnection(joar4.s3.[hidden])
DEBUG: format_uri(): /
DEBUG: Sending request method_string='GET', uri='/', headers={'Authorization': 'AWS [hidden]', 'x-amz-date': 'Thu, 26 Mar 2015 12:10:24 +0000'}, body=(0 bytes)
DEBUG: Response: {'status': 200, 'headers': {'date': 'Thu, 26 Mar 2015 12:10:24 GMT', 'content-length': '564', 'content-type': 'application/xml', 'server': 'Riak CS'}, 'reason': 'OK', 'data': '<?xml version="1.0" encoding="UTF-8"?>
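For reference, the SignHeaders line in the dump is the AWS signature v2 string-to-sign: s3cmd HMAC-SHA1s it with the secret key and base64-encodes the digest into the Authorization header. A minimal sketch of that step (not s3cmd's actual code; placeholder key):

import base64, hmac
from hashlib import sha1

secret_key = "EXAMPLE-SECRET"  # placeholder, never paste a real key
string_to_sign = ("GET\n\n\n\n"
                  "x-amz-date:Thu, 26 Mar 2015 12:10:24 +0000\n"
                  "/joar4/")  # the SignHeaders value from the log above
digest = hmac.new(secret_key.encode(), string_to_sign.encode(), sha1).digest()
signature = base64.b64encode(digest)
# sent as:  Authorization: AWS <access_key>:<signature>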
The bucket I'm syncing in this test only contains my .bashrc, which is 3716 bytes... And note that bucket sync works fine from my local workstation with the same config, same Riak CS, same buckets and so on (except my workstation has Python 2.7.5).
If this doesn't happen at Amazon, it's probably Riak CS related (I don't have an Amazon account).
Hmm, I created an Amazon test account and there were no problems there; I wasn't able to reproduce. Consider this problem Riak CS related.
I'll contact Basho and see what they have to say.
Closing pending resolution by basho.
I suspect the upload user may be triggering this bug: https://github.com/basho/riak_cs/issues/939. @mdomsch, does that sound like a reasonable possibility?
Yes, it would appear so. The s3cmd [copy] command does not send a Content-Length header.
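In other words, the server-side copy is a PUT with an x-amz-copy-source header and an empty body, and per basho/riak_cs#939 Riak CS rejects a bodyless request that lacks an explicit Content-Length: 0. A rough sketch of the request's shape (hypothetical endpoint, Python 2's httplib to match the thread; not s3cmd's actual code):

import httplib  # Python 2, as in the thread; http.client on Python 3

conn = httplib.HTTPConnection("joar5.s3.example.com")  # hypothetical host
conn.putrequest("PUT", "/bashrc")
conn.putheader("x-amz-copy-source", "/joar4/bashrc")  # server-side copy: empty body
conn.putheader("Content-Length", "0")  # the header s3cmd omits on copy
# (a real request also needs signed Authorization and x-amz-date headers)
conn.endheaders()
response = conn.getresponse()
print response.status, response.reason  # without Content-Length, Riak CS answers 413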
On Tue, Apr 7, 2015 at 11:26 AM, Bryan Hunt notifications@github.com wrote:
I suspect the upload user may be triggering this bug : basho/riak_cs#939 https://github.com/basho/riak_cs/issues/939 @mdomsch https://github.com/mdomsch does that sound like a reasonable possibility?