@a2geek - Thanks for the detailed report; it definitely looks like that is causing the problem. If I recall, it has to do with S3 wanting that specific host (rather than one of the regionalized hosts). If a non-AWS URL is specified, it could use whatever the host is (as it did prior to that patch), but I think for AWS it should probably maintain the same behavior. Does that make sense?
Agreed. Uncertain how to accomplish that. I'm pretty certain I saw that setting `endpoint` internally sets `host`, `port`, and `protocol` (going by memory, so reality may vary a bit). So I think `@host` is always set? Maybe figure out if it's an S3 endpoint and act accordingly?
Yeah, I think checking host probably would make sense. And then we could have S3 behavior only if it is either `*.amazonaws.com` or `*.amazonaws.com.cn`. Does that line up with what you were thinking?
Yeah, sounds like a decent strategy.
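Something along these lines, maybe (a rough sketch only; `amazon_host?` is an illustrative helper, not anything in fog-aws):

```ruby
require 'uri'

# Illustrative helper (not fog-aws code): treat the endpoint as "real" S3
# only when its host is an AWS one.
def amazon_host?(host)
  host.end_with?('.amazonaws.com', '.amazonaws.com.cn')
end

host = URI.parse('https://s3.internal.example.com').host # placeholder endpoint

if amazon_host?(host)
  # keep the current S3 behavior, e.g. forcing s3.amazonaws.com for GetService
else
  # honor the user-supplied host as-is (the pre-patch behavior)
end
```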
This issue has been marked inactive and will be closed if no further activity occurs.
Hey, has this ever been resolved? We're currently hitting exactly this issue using fog-aws together with a self-hosted GitLab instance. We're trying to use the B2/Backblaze S3-compatible API. Also see here: https://docs.gitlab.com/ee/administration/object_storage.html
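For reference, the consolidated object storage settings those docs describe look roughly like this in `gitlab.rb` (a sketch only; the Backblaze endpoint, region, and credentials are placeholders, not our actual values):

```ruby
# Sketch of GitLab's consolidated object storage config (/etc/gitlab/gitlab.rb);
# all values below are placeholders.
gitlab_rails['object_store']['enabled'] = true
gitlab_rails['object_store']['connection'] = {
  'provider'              => 'AWS',
  'region'                => 'us-west-000',
  'endpoint'              => 'https://s3.us-west-000.backblazeb2.com',
  'path_style'            => true,
  'aws_access_key_id'     => 'KEY_ID',
  'aws_secret_access_key' => 'SECRET'
}
```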
@max-critcrew - Not that I'm aware of. I don't recall how I got around it, but I ultimately ended up using a different tool to set up connections. (Note that I'm not a Ruby programmer and I was configuring Cloud Foundry, so that solution is likely not useful to you.)
Rereading the discussion, I think I had an idea where the issue occurred... and it sounds like the Fog team was willing to look at a PR...
Oh, one additional note. I poked around in the Genesis configs for MinIO, and I see how they set up the region... maybe that is the trick? (I've moved on, so I can't really try this myself any more.)
```yaml
fog_connection:
  provider: AWS
  endpoint: (( grab params.blobstore_minio_endpoint ))
  aws_access_key_id: ((blobstore_access_key_id))
  aws_secret_access_key: ((blobstore_secret_access_key))
  aws_signature_version: '2'
  region: "''"   # <-- THIS??
  path_style: true
```
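Expressed directly against fog-aws in Ruby, those options would look roughly like this (a sketch; the endpoint and credential sources are placeholders):

```ruby
require 'fog/aws'

# Rough Ruby equivalent of the fog_connection config above; note the
# empty-string region, which is the trick in question.
storage = Fog::Storage.new(
  provider:              'AWS',
  endpoint:              'https://minio.internal.example:9000', # placeholder
  aws_access_key_id:     ENV['BLOBSTORE_ACCESS_KEY_ID'],
  aws_secret_access_key: ENV['BLOBSTORE_SECRET_ACCESS_KEY'],
  aws_signature_version: 2,
  region:                '',
  path_style:            true
)
```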
@a2geek thanks for the extra context. I could imagine the region trick may have been needed, but I think with other more recent changes, setting the endpoint should be good enough. I also discussed this more with @max-critcrew on another issue, and it sounds like he at least found a workaround for his particular case.
Wonderful. Thanks for the update!
I'd like to use Fog to connect to an internal S3 provider (custom host). What I've been finding is that some operations work, but others mysteriously connect to `s3.amazonaws.com`. Pulling some samples together, I can write to and read from a bucket, but listing buckets mysteriously switches over to the AWS URL.

Digging a little further, I see that `get_service.rb` (see here) seems to be the only place where the Amazon AWS URL is assigned outside of the common setup logic. I'd submit a PR, but I also see the last comment was "set correct host for get service operation", so I wonder if actual AWS S3 connections require something different than non-Amazon connections.

My sample code, in case that is relevant:
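Roughly, the sample boils down to something like the following (reconstructed for illustration, not the original snippet; the endpoint, bucket, and credentials are placeholders):

```ruby
require 'fog/aws'

# Reconstructed illustration of the reported behavior; all values are placeholders.
connection = Fog::Storage.new(
  provider:              'AWS',
  endpoint:              'https://s3.internal.example.com', # custom, non-AWS host
  aws_access_key_id:     'KEY',
  aws_secret_access_key: 'SECRET',
  path_style:            true
)

# Writing and reading hit the custom endpoint as expected:
connection.put_object('my-bucket', 'hello.txt', 'hello world')
puts connection.get_object('my-bucket', 'hello.txt').body

# Listing buckets issues GetService and unexpectedly goes to s3.amazonaws.com:
connection.directories.each { |d| puts d.key }
```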
The questionable code is in `Fog::AWS::Storage::Real#get_service`:
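From memory, the request there pins the host roughly like this (paraphrased, not the exact fog-aws source; the parser constant in particular is approximate):

```ruby
# Paraphrased shape of Fog::AWS::Storage::Real#get_service at the time;
# the hard-coded :host overrides any custom endpoint.
def get_service
  request(
    expects:    200,
    headers:    {},
    host:       's3.amazonaws.com', # <-- the line in question
    idempotent: true,
    method:     'GET',
    parser:     Fog::Parsers::AWS::Storage::GetService.new
  )
end
```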
When I comment out `host`, it fixes my issue.
The stack trace I get is:
For clarity, nowhere do I specify `s3.amazonaws.com` in the sample code. Thanks!