varontron opened this issue 6 months ago
Thank you for your report, @varontron, and I'm sorry that you are having trouble. I can't reproduce this issue myself, but we did ship a change recently that could have influenced the signature calculation.
A few next steps:
Can you see if this tag, latest-njs-oss-20240306 (the one right before that change), fixes your issue? If it does, that will give us a start.
Try the latest image, specifying these two config options (a compose sketch covering both of these steps is included below):
S3_STYLE=virtual-v2
S3_SERVICE=s3
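In case it helps, here's a minimal docker-compose sketch of both suggestions. The image path is the one from the environment section below, the tag is the one mentioned above, and the two environment values are the suggested ones; everything else in your existing service definition stays as-is:

```yaml
services:
  s3-gateway:
    # Suggestion 1: pin the tag from just before the signature change
    image: ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest-njs-oss-20240306
    # Suggestion 2: stay on latest-njs-oss and add the two options below,
    # alongside whatever gateway settings you already have
    environment:
      S3_STYLE: virtual-v2
      S3_SERVICE: s3
```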
I am attempting to migrate us to S3's current recommendations on pathing. I made every effort to keep the defaults the same, but it's possible a bug snuck through my testing.
Another factor that can cause this is clock skew. I would ensure that the compute instance running the gateway is synced with an NTP server.
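For a quick check on a systemd-based host, something like this shows whether the clock is considered synced (assuming timedatectl is available; the exact tooling varies by distro):

```shell
# Look for "System clock synchronized: yes" and "NTP service: active"
timedatectl status | grep -Ei 'synchronized|NTP'
```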
I'm having a similar issue when using the unprivileged-oss image to serve files off a private bucket in MinIO. Switching to the deprecated AWS signature version 2 helped me as well. Here's what the environment looks like in my case:
AWS_ACCESS_KEY_ID=<redacted>
AWS_SECRET_ACCESS_KEY=<redacted>
AWS_SIGS_VERSION=2
S3_BUCKET_NAME=<redacted>
S3_REGION=us-east-1
S3_SERVER=minio-headless.minio.svc.cluster.local
S3_SERVER_PORT=9000
S3_SERVER_PROTO=http
S3_STYLE=path
@spijet Thank you for the report. Although I haven't been able to determine the cause of this issue yet, each data point helps. Out of curiosity, did this issue come up recently, after you'd been using signature V4 successfully? I have a suspicion that a recent change could have introduced a bug for some use cases, but I have not been able to reproduce the issue on any of my test setups.
Describe the bug
Receive a "The request signature we calculated does not match the signature" error when attempting to load a resource from S3, with `proxy_intercept_errors off`.
To reproduce
Steps to reproduce the behavior: run the gateway with AWS_SIGS_VERSION=4 and attempt to load a resource from the bucket (a rough compose sketch of the failing setup follows below).
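The gateway service looks roughly like this; the image is the one listed under "Your environment" below, the env names are the ones used elsewhere in this thread, and the bucket/region/server values are placeholders rather than my exact config:

```yaml
services:
  s3-gateway:
    image: ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest-njs-oss
    environment:
      AWS_SIGS_VERSION: "4"                      # switching this to "2" works around the error
      S3_BUCKET_NAME: "example-bucket"           # placeholder
      S3_REGION: "us-east-1"                     # placeholder
      S3_SERVER: "s3.us-east-1.amazonaws.com"    # placeholder
      S3_SERVER_PROTO: "https"
      S3_SERVER_PORT: "443"
      # Credentials come from the IAM role, so no AWS_ACCESS_KEY_ID /
      # AWS_SECRET_ACCESS_KEY are set here.
```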
Expected behavior
Expect the resource to load.
Your environment
Version of the S3 container used (when downloaded from either Docker Hub or the GitHub Container Registry): ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest-njs-oss, pulled 2 days ago
Target deployment platform for the S3 container
S3 backend implementation (AWS, Ceph, NetApp StorageGrid, etc...): AWS
Authentication method (IAM, IAM with Fargate, IAM with K8S, AWS Credentials, etc...): IAM
Additional context
System was working flawlessly for months using `AWS_SIGS_VERSION=4`, launched with docker-compose. A couple of days ago, I added an additional container spec to `docker-compose.yml` and restarted. The problem seemed to arise when a new version of the gateway image was downloaded and deployed. Changing the config to use `AWS_SIGS_VERSION=2` re-enables the system and delivers the resources as expected. Regrettably, I don't have a handy backup of the `docker-compose.yml` config, but I'm reasonably confident the `AWS_SIGS_VERSION` was not changed from 2 to 4 prior to the restart. Is it possible something in the latest image broke with regard to the `AWS_SIGS_VERSION` setting?