alexalegre-wf opened this issue 7 years ago
I also have the same problem. As a note, it seems the file structure for fake-s3 is `/fakes3_root/<sub_domain_name>/<bucket>/<key>`. So if your URL were something like `https://aaa.bbb.ccc.com:4569`, you would get `/fakes3_root/aaa/<bucket>/<key>`. Is there a way to remove the dependency on the subdomain name when creating the directories, as a temporary solution?
@alexalegre-wf Here's a related issue: https://github.com/jubos/fake-s3/issues/114. The current workaround is to use the `-H` flag to get around the code path that checks against `@root_hostnames`; in your case, `-H fake-s3`.
EDIT: The bucket becomes the subdomain of the host, so you should actually pass `-H fake-s3.<rest-of-host>` instead, i.e. `-H fake-s3.somedomain.com`.
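For example, the start-up command would be along these lines (a sketch; the root directory and port are placeholders, and `-r`/`-p`/`-H` are fake-s3's documented root/port/hostname flags):

```sh
fakes3 -r /mnt/fakes3_root -p 4569 -H fake-s3.somedomain.com
```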
@Rhathe Thanks!
I'm using boto3 to access a local fake-s3 instance for testing. When I called `copy_object` to copy an object from one local bucket to another, this error occurred:
The code causing the issue is essentially a `copy_object` call (with appropriate params to point to fake-s3).
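A minimal boto3 sketch of such a call might look like this; the endpoint URL, credentials, and bucket/key names are placeholders, not the reporter's actual values:

```python
import boto3

# Client pointed at the local fake-s3 instance (placeholder endpoint and
# credentials).
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4569",
    aws_access_key_id="fake",
    aws_secret_access_key="fake",
)

# Copy an object from one local bucket to another.
s3.copy_object(
    CopySource={"Bucket": "src-bucket", "Key": "object_id"},
    Bucket="dst-bucket",
    Key="object_id",
)
```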
Digging in a bit, it appears the issue is the source file path that `copy_object` creates, versus the expected, actual path: the file store appears to expect the string `'fake-s3'` as the `src_bucket_name`, just as `dst_bucket_name` is `'fake-s3'`, and the `src_name` to be `bucket/object_id`, just as `dst_name` is. Changing the put request normalisation code seems to have cleared up any issues copying objects between buckets, but breaks fetching HEAD on objects.
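For illustration only, here is a rough Python sketch of the source-path normalisation described above; fake-s3 itself is written in Ruby, so this is not the actual patch, and the function and argument names are hypothetical:

```python
# Illustrative only, not fake-s3 code: sketches the mapping described above,
# where the file store wants the host-derived bucket ("fake-s3") as the
# bucket name and "<bucket>/<key>" as the object name.
def normalise_copy_source(copy_source_header, host_bucket="fake-s3"):
    # copy_source_header arrives as e.g. "/src-bucket/object_id"
    # (the X-Amz-Copy-Source header that copy_object sets).
    elems = [e for e in copy_source_header.split("/") if e]
    src_bucket, src_key = elems[0], "/".join(elems[1:])
    return host_bucket, f"{src_bucket}/{src_key}"

# Example: returns ("fake-s3", "src-bucket/object_id")
print(normalise_copy_source("/src-bucket/object_id"))
```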