I'm gzip-compressing objects I'm loading into fake-s3, but the Content-Encoding header doesn't seem to be persisted in the metadata for the objects. (Others like Content-Type apply just fine.)
I'm running this locally in a docker image, and the golang sdk and aws-cli both experience the same issue, so I believe the problem is in fake-s3 itself. (I don't know ruby though, so I couldn't point to where right now.)
Here are some steps to reproduce. They're pretty specific to my setup, but I've tried to make them generic:
# assumes you have AWS creds in your env
$ echo 'hello world' | gzip | aws --endpoint http://localhost:4569 s3 cp - s3://test/test --content-encoding gzip --content-type text/plain
$ aws --endpoint http://localhost:4569 s3api head-object --bucket test --key test
{
"AcceptRanges": "bytes",
"ContentType": "text/plain",
"LastModified": "Wed, 15 Nov 2017 21:48:00 GMT",
"ContentLength": 13,
"ETag": "\"c897d1410af8f2c74fba11b1db511e9e\"",
"Metadata": {}
}
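For context on why the dropped header matters: clients rely on Content-Encoding to know the stored bytes must be gunzipped before use. A quick Ruby illustration of that round-trip (Ruby only because fake-s3 is written in it; this is not fake-s3 code):

```ruby
require 'zlib'
require 'stringio'

body = "hello world\n"

# What the client uploads: gzip-compressed bytes, plus a
# Content-Encoding: gzip header so readers know to decompress.
compressed = StringIO.new
gz = Zlib::GzipWriter.new(compressed)
gz.write(body)
gz.finish # finish the gzip stream without closing the StringIO
stored_bytes = compressed.string

# Without the persisted header, a downloader receives raw gzip
# bytes with no signal that they need decompressing.
raise 'stored bytes are not plain text' if stored_bytes.start_with?('hello')

# With the header, the client can decode correctly.
decoded = Zlib::GzipReader.new(StringIO.new(stored_bytes)).read
puts decoded  # -> hello world
```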
Looking at the metadata stored by fake-s3 itself yields about the same thing: no Content-Encoding is persisted.
In code, it appears like Content-Encoding should be handled in some capacity, but maybe I'm misunderstanding how it's supposed to function. Thanks!
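For reference, this is the kind of persistence I'd expect, sketched with a hypothetical file layout and key names (I don't know fake-s3's actual metadata schema, so treat every name here as an assumption):

```ruby
require 'yaml'
require 'tempfile'

# Hypothetical: a fake-s3-style store keeps per-object metadata in a
# small YAML file next to the object content. These key names are my
# guess, not the project's real schema.
metadata = {
  'md5'              => 'c897d1410af8f2c74fba11b1db511e9e',
  'content_type'     => 'text/plain',
  'content_encoding' => 'gzip', # <- the field that appears to be dropped
  'size'             => 13
}

file = Tempfile.new('metadata')
file.write(YAML.dump(metadata))
file.rewind

# On HEAD/GET, the server would read this back and emit a
# Content-Encoding response header whenever the key is present.
loaded = YAML.load(file.read)
puts loaded['content_encoding']  # -> gzip
file.close!
```

If the write path records content_type but never content_encoding, that would match exactly the head-object output above.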