jubos / fake-s3

A lightweight server clone of Amazon S3 that simulates most of the commands supported by S3 with minimal dependencies

[Bug] content encoding not being stored in metadata #226

Open dominicbarnes opened 6 years ago

dominicbarnes commented 6 years ago

I'm gzip-compressing the objects I'm loading into fake-s3, but the Content-Encoding header doesn't seem to be persisted in the objects' metadata. (Others, like Content-Type, apply just fine.)

I'm running fake-s3 locally in a Docker image, and both the Go SDK and the AWS CLI hit the same issue, so I believe the problem is in fake-s3 itself. (I don't know Ruby, though, so I can't point to exactly where right now.) A Go sketch follows the CLI steps below.

Here are some steps to reproduce. They're pretty specific to my setup, but I'll try my best to keep them generic:

# assumes you have AWS creds in your env

$ echo 'hello world' | gzip | aws --endpoint http://localhost:4569 s3 cp - s3://test/test --content-encoding gzip --content-type text/plain

$ aws --endpoint http://localhost:4569 s3api head-object --bucket test --key test
{
    "AcceptRanges": "bytes",
    "ContentType": "text/plain",
    "LastModified": "Wed, 15 Nov 2017 21:48:00 GMT",
    "ContentLength": 13,
    "ETag": "\"c897d1410af8f2c74fba11b1db511e9e\"",
    "Metadata": {}
}
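
For completeness, here's roughly the Go-side equivalent; a minimal sketch using aws-sdk-go v1 (the endpoint, bucket, and key mirror the CLI example above; this is illustrative, not my exact code):

package main

import (
    "bytes"
    "compress/gzip"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    // Point the SDK at the local fake-s3 endpoint; creds come from the env.
    sess := session.Must(session.NewSession(&aws.Config{
        Endpoint:         aws.String("http://localhost:4569"),
        Region:           aws.String("us-east-1"),
        S3ForcePathStyle: aws.Bool(true),
    }))
    svc := s3.New(sess)

    // gzip-compress the body, mirroring `echo 'hello world' | gzip` above.
    var buf bytes.Buffer
    zw := gzip.NewWriter(&buf)
    if _, err := zw.Write([]byte("hello world\n")); err != nil {
        log.Fatal(err)
    }
    if err := zw.Close(); err != nil {
        log.Fatal(err)
    }

    // Upload with both Content-Encoding and Content-Type set.
    if _, err := svc.PutObject(&s3.PutObjectInput{
        Bucket:          aws.String("test"),
        Key:             aws.String("test"),
        Body:            bytes.NewReader(buf.Bytes()),
        ContentEncoding: aws.String("gzip"),
        ContentType:     aws.String("text/plain"),
    }); err != nil {
        log.Fatal(err)
    }

    // HEAD the object back. Against real S3, ContentEncoding round-trips;
    // against fake-s3 it comes back empty, while ContentType survives.
    head, err := svc.HeadObject(&s3.HeadObjectInput{
        Bucket: aws.String("test"),
        Key:    aws.String("test"),
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("ContentType:    ", aws.StringValue(head.ContentType))     // "text/plain"
    fmt.Println("ContentEncoding:", aws.StringValue(head.ContentEncoding)) // expected "gzip", got ""
}

Against real S3, the same sequence returns ContentEncoding: gzip from HeadObject, so I'd expect fake-s3 to persist and echo the header the same way.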

Looking at the metadata fake-s3 itself stores on disk yields about the same thing:

---
:md5: c897d1410af8f2c74fba11b1db511e9e
:content_type: text/plain
:size: 13
:modified_date: '2017-11-15T21:48:00.000Z'
:amazon_metadata: <REDACTED>
:custom_metadata: {}

Notably, there's no content-encoding key in that metadata at all; presumably fake-s3 would need to persist something like a :content_encoding: entry alongside :content_type: and echo it back on GET/HEAD. In the code it appears that Content-Encoding should be handled in some capacity, but maybe I'm misunderstanding how it's supposed to work.

Thanks!