Closed · octavd closed this issue 4 years ago
Also, I've tried the following:
library("aws.s3")
Sys.setenv("AWS_ACCESS_KEY_ID" = "key",
"AWS_SECRET_ACCESS_KEY" = "secret",
"AWS_S3_ENDPOINT"="url")
get_bucket("bucket")
and I got the following error:
Error in curl::curl_fetch_memory(url, handle = handle): Empty reply from server
Traceback:
1. get_bucket("bucket")
2. s3HTTP(verb = "GET", bucket = bucket, query = query, parse_response = parse_response,
. ...)
3. httr::GET(url, H, query = query, show_progress, ...)
4. request_perform(req, hu$handle$handle)
5. request_fetch(req$output, req$url, handle)
6. request_fetch.write_memory(req$output, req$url, handle)
7. curl::curl_fetch_memory(url, handle = handle)
403 means your authentication tokens are incorrect - see the full error description for details. Please consult the documentation of your custom back-end for details on what it requires.
Hello @s-u,
First of all, thank you for the reply.
Secondly, there is no problem with the credentials. I've tried with Postman using the same key/secret/url and it retrieved the buckets.
I've also tried with Python + boto3, using the same key/secret/url, and it worked:
import boto3

session = boto3.session.Session()
s3_client = session.client(
    service_name='s3',
    aws_access_key_id='aws_access_key_id',
    aws_secret_access_key='aws_secret_access_key',
    endpoint_url='url',
)
s3_client.list_buckets()  # returns the buckets with these credentials
Unfortunately, with this library it does not work. I also cannot view the full curl request that is made; it appears truncated.
Below is the full error displayed (using verbose = T):
Locating credentials
Checking for credentials in user-supplied values
Using user-supplied value for AWS Access Key ID
Using user-supplied value for AWS Secret Access Key
Using default value for AWS Region ('us-east-1')
Non-AWS base URL requested.
S3 Request URL: custom_url
Executing request with AWS credentials
Locating credentials
Checking for credentials in user-supplied values
Using user-supplied value for AWS Access Key ID
Using user-supplied value for AWS Secret Access Key
Using default value for AWS Region ('us-east-1')
Parsing AWS API response
Client error: (403) Forbidden
List of 4
$ Code : chr "AccessDenied"
$ Message : chr "Access Denied"
$ Resource : list()
$ RequestId: chr "id"
- attr(*, "headers")=List of 7
..$ server : chr "S3 Server"
..$ x-amz-id-2 : chr "id"
..$ x-amz-request-id: chr "id"
..$ content-type : chr "application/xml"
..$ content-length : chr "174"
..$ date : chr "Fri, 22 May 2020 15:24:50 GMT"
..$ connection : chr "keep-alive"
..- attr(*, "class")= chr [1:2] "insensitive" "list"
- attr(*, "class")= chr "aws_error"
- attr(*, "request_canonical")= chr "GET\n/\n\nhost:custom_url\nx-amz-date:20200522T152450Z\n\nho"| __truncated__
- attr(*, "request_string_to_sign")= chr "AWS4-HMAC-SHA256\n20200522T152450Z\n20200522/us-east-1/s3/aws4_request\"| __truncated__
- attr(*, "request_signature")= chr "AWS4-HMAC-SHA256 Credential=key/20200522/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-d"| __truncated__
NULL
Error in parse_aws_s3_response(r, Sig, verbose = verbose): Forbidden (HTTP 403).
Traceback:
1. bucketlist(base_url = "url",
. region = "", use_https = TRUE, key = "key",
. secret = "secret", verbose = T)
2. s3HTTP(verb = "GET", ...)
3. parse_aws_s3_response(r, Sig, verbose = verbose)
4. httr::stop_for_status(r)
Could you please advise?
Thank you very much!
Ok, thanks, I'm wondering if your back-end has issues with the signature. The issue here is that we cannot really look into anything unless you provide details on the backend - and if it is something proprietary it may be impossible to reproduce.
The other way to go about it would be to look at the request from boto3 and compare it. Note that you can simply use
s3HTTP(base_url = "url", region = "", use_https = TRUE, key = "key",
       secret = "secret", parse_response = FALSE)
to return the actual response object with all details.
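For example, a minimal sketch of how that raw response could be inspected with httr (assuming, as suggested above, that s3HTTP() with parse_response = FALSE hands back the underlying httr response object; the placeholder values are the same as elsewhere in this thread):

library(aws.s3)
library(httr)

# parse_response = FALSE should return the raw response instead of parsing it
r <- s3HTTP(base_url = "url", region = "", use_https = TRUE,
            key = "key", secret = "secret", parse_response = FALSE)

status_code(r)                                     # e.g. 403
headers(r)                                         # headers sent by the back-end
cat(content(r, as = "text", encoding = "UTF-8"))   # raw XML error body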
Hello again, @s-u,
I did some more debugging with IntelliJ and I found the following: s3HTTP builds a headers list that contains these values (from the IntelliJ debugger):
x-amz-date = (136 B) "20200527T115333Z"
x-amz-content-sha256 = (232 B) "content-sha256"
Authorization = (296 B) "AWS4-HMAC-SHA256 Credential=key/20200527/us-east-1/s3/aws4_request, SignedHeaders=;host;x-amz-date, Signature=signature"
Looking at the AWS documentation on "Using the Authorization Header", I noticed that SignedHeaders is an alphabetically sorted, semicolon-separated list of lowercase request header names, and that the request headers in the list must be the same headers included in the CanonicalHeaders string. For that example, the value of SignedHeaders would be as follows:
host;x-amz-content-sha256;x-amz-date
but as you can see, the request contains only:
SignedHeaders=;host;x-amz-date
I've tested it: if we add x-amz-content-sha256 to the SignedHeaders in the Authorization header, everything works and the bucket list is displayed.
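For reference, with that change the Authorization header would look roughly like this (values redacted as in the debug output above):

Authorization = "AWS4-HMAC-SHA256 Credential=key/20200527/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=signature"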
@octavd thanks for digging! I think I got the issue. The problem is that the design of https://github.com/cloudyr/aws.signature creates a catch-22 situation: aws.signature::signature_v4_auth() creates the signature and computes the body hash, but does NOT create the x-amz-content-sha256 header, so the signature is computed without it and hence it cannot be on the list of canonical headers. But in order to add it from the outside, one would have to include x-amz-content-sha256 in the list of canonical headers with the hash of the payload, which we don't have until we get the result of signature_v4_auth() - which we can't get without the value of x-amz-content-sha256.
So I would recommend filing an issue with https://github.com/cloudyr/aws.signature against signature_v4_auth() to provide an option to generate x-amz-content-sha256 and add it to the canonical headers, since that would be the place to do it.
That said, I have added a work-around to aws.s3 so that it computes the body hash itself before calling aws.signature, so please see if that fixes your problem.
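As a rough illustration of the idea only (not the actual aws.s3 code): the body hash can be computed up front, e.g. with the digest package, and placed in the header list before signing, so that x-amz-content-sha256 ends up among the canonical (and therefore signed) headers:

library(digest)

body <- ""   # a GET request has an empty body
body_hash <- digest(body, algo = "sha256", serialize = FALSE)

# Adding the hash header *before* the signature is computed means it can be
# included in the canonical headers and thus listed in SignedHeaders.
headers <- list(
  host = "custom_url",
  `x-amz-date` = format(Sys.time(), "%Y%m%dT%H%M%SZ", tz = "UTC"),
  `x-amz-content-sha256` = body_hash
)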
@s-u works like a charm now! Thank you very much for this fix. I see that you've created the issue on the aws.signature repo. Also, could you please tell me when this will be available on CRAN? For the moment I'm building the tar.gz with make and installing it from that.
Before filing an issue, please make sure you are using the latest development version, which you can install using
install.packages("aws.s3", repo = "https://rforge.net")
(see README), since the issue may have been fixed already. Also search existing issues first to avoid duplicates. Please specify whether your issue is about:
I have a custom S3 storage and I am getting 403 Forbidden no matter what I do when trying to retrieve the buckets. Could you please tell me what I am doing wrong?