s3tools / s3cmd

Official s3cmd repo -- Command line tool for managing S3 compatible storage services (including Amazon S3 and CloudFront).
https://s3tools.org/s3cmd
GNU General Public License v2.0

unable to get encrypted file - 400 (Bad Request) #708

Open gulycka opened 8 years ago

gulycka commented 8 years ago

Hello,

We have been using the s3cmd tool to sync a whole S3 bucket locally. Everything works fine until we turn on server-side encryption for the files.

The user configured in s3cmd has the appropriate privileges for the KMS key used for file encryption, as well as privileges to read from the bucket.
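For context, the kind of grant meant here is roughly the following IAM policy (a minimal sketch, not our exact policy; the bucket name is taken from the log below, while ACCOUNT_ID and KEY_ID are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": ["arn:aws:s3:::my_bucket", "arn:aws:s3:::my_bucket/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["kms:Decrypt"],
      "Resource": "arn:aws:kms:us-east-1:ACCOUNT_ID:key/KEY_ID"
    }
  ]
}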

I tried to read the encrypted file as the same user via the Amazon web console, and there was no problem.

Below is the debug output from getting a single file encrypted with server-side encryption:

$ s3cmd get s3://my_bucket/path_to_file/my_file ./ --debug
DEBUG: ConfigParser: Reading file '/home/user/.s3cfg'
DEBUG: ConfigParser: access_key->AK...17_chars...Q
DEBUG: ConfigParser: access_token->
DEBUG: ConfigParser: add_encoding_exts->
DEBUG: ConfigParser: add_headers->
DEBUG: ConfigParser: bucket_location->US
DEBUG: ConfigParser: cache_file->
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delay_updates->False
DEBUG: ConfigParser: delete_after->False
DEBUG: ConfigParser: delete_after_fetch->False
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: enable_multipart->True
DEBUG: ConfigParser: encoding->UTF-8
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: expiry_date->
DEBUG: ConfigParser: expiry_days->
DEBUG: ConfigParser: expiry_prefix->
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->Yr...17_chars...h
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->s3.amazonaws.com
DEBUG: ConfigParser: host_bucket->%(bucket)s.s3.amazonaws.com
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: ignore_failed_copy->False
DEBUG: ConfigParser: invalidate_default_index_on_cf->False
DEBUG: ConfigParser: invalidate_default_index_root_on_cf->True
DEBUG: ConfigParser: invalidate_on_cf->False
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: max_delete->-1
DEBUG: ConfigParser: mime_type->
DEBUG: ConfigParser: multipart_chunk_size_mb->15
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->
DEBUG: ConfigParser: proxy_port->0
DEBUG: ConfigParser: put_continue->False
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->4096
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: restore_days->1
DEBUG: ConfigParser: secret_key->rt...37_chars...L
DEBUG: ConfigParser: send_chunk->4096
DEBUG: ConfigParser: server_side_encryption->False
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->300
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_https->True
DEBUG: ConfigParser: use_mime_magic->True
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: ConfigParser: website_endpoint->http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
DEBUG: ConfigParser: website_error->
DEBUG: ConfigParser: website_index->index.html
DEBUG: Updating Config.Config cache_file ->
DEBUG: Updating Config.Config encoding -> UTF-8
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'get' using UTF-8
DEBUG: Unicodising 's3://my_bucket/path_to_file/my_file' using UTF-8
DEBUG: Unicodising './' using UTF-8
DEBUG: Command: get
DEBUG: Applying --exclude/--include
DEBUG: CHECK: my_file
DEBUG: PASS: u'my_file'
INFO: Summary: 1 remote files to download
DEBUG: DeUnicodising u'./my_file' using UTF-8
DEBUG: Unicodising './my_file' using UTF-8
DEBUG: String 'path_to_file/my_file' encoded to 'path_to_file/my_file'
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Tue, 01 Mar 2016 11:34:31 +0000\n/my_bucket/path_to_file/my_file'
DEBUG: CreateRequest: resource[uri]=/path_to_file/my_file
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Tue, 01 Mar 2016 11:34:31 +0000\n/my_bucket/path_to_file/my_file'
s3://my_bucket/path_to_file/my_file -> ./my_file  [1 of 1]
DEBUG: get_hostname(my_bucket): my_bucket.s3.amazonaws.com
DEBUG: ConnMan.get(): creating new connection: https://my_bucket.s3.amazonaws.com
DEBUG: format_uri(): /path_to_file/my_file
DEBUG: Response: {'status': 400, 'headers': {'x-amz-region': 'us-east-1', 'x-amz-id-2': 'SOME_ID=', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'connection': 'close', 'x-amz-request-id': 'SOME_REQUEST_ID', 'date': 'Tue, 01 Mar 2016 11:34:31 GMT', 'content-type': 'application/xml'}, 'reason': 'Bad Request'}
DEBUG: S3Error: 400 (Bad Request)
DEBUG: HttpHeader: x-amz-region: us-east-1
DEBUG: HttpHeader: x-amz-id-2: SOME_ID=
DEBUG: HttpHeader: server: AmazonS3
DEBUG: HttpHeader: transfer-encoding: chunked
DEBUG: HttpHeader: connection: close
DEBUG: HttpHeader: x-amz-request-id: SOME_REQUEST_ID
DEBUG: HttpHeader: date: Tue, 01 Mar 2016 11:34:31 GMT
DEBUG: HttpHeader: content-type: application/xml
DEBUG: object_get failed for './my_file', deleting...
ERROR: S3 error: 400 (Bad Request):

We are able to download unencrypted files from the same bucket without any issue; the problem occurs only when downloading the encrypted files.

Best Regards.

fviard commented 8 years ago

I don't see any "kms_key" in your debug log. Are you sure you didn't forget to set it in the config file used for the get?
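For reference, setting it would look roughly like this in ~/.s3cfg (a sketch; the key id below is just a placeholder):

# hypothetical KMS key id -- replace with the id of the key used to encrypt the objects
kms_key = 1234abcd-12ab-34cd-56ef-1234567890ab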

gulycka commented 8 years ago

I didn't set the KMS key in the config file, because according to the Amazon documentation there is no need to do so:

With SSE-KMS, Amazon S3 will automatically decrypt the log files so that you do not need to make any changes to your application. As always, you need to make sure that your application has appropriate permissions, i.e. Amazon S3 GetObject and KMS Decrypt permissions.

Correct me if I'm wrong, but you only have to use --server-side-encryption-kms-id=KEY_ID to put data into an S3 bucket, not to get data from it.
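In other words, my understanding of the intended usage is roughly the following (a sketch; KEY_ID stands in for the actual KMS key id):

# upload: the KMS key has to be named explicitly
$ s3cmd put ./my_file s3://my_bucket/path_to_file/my_file --server-side-encryption --server-side-encryption-kms-id=KEY_ID
# download: S3 is supposed to decrypt transparently, so no key-related option should be needed
$ s3cmd get s3://my_bucket/path_to_file/my_file ./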

jmleoni commented 8 years ago

I am noticing the same behaviour as gulycka.