inssein opened this issue 4 years ago
@inssein I did some investigation and I believe this is related to a regression introduced in Apache httpclient version 4.5.9: `DefaultHostnameVerifier` stopped matching certificates for hostnames with wildcards. See https://issues.apache.org/jira/browse/HTTPCLIENT-1997.
We will work on upgrading the httpclient version in the SDK, but in the meantime you can override the version in your project.
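For anyone wondering what "override the version in your project" looks like, a minimal sketch using Maven's `dependencyManagement` (assuming a standard Maven project; per the Jira issue above, 4.5.10 is the first release containing the fix):

```xml
<!-- Forces every transitive org.apache.httpcomponents:httpclient to a fixed
     version, overriding whatever the AWS SDK (or any framework) pulls in. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>4.5.10</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

After adding this, re-run `mvn dependency:tree` to confirm the resolved version actually changed.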
@debora-ito ahh, that's good to know, thanks.
@debora-ito rather late update, but I just tried pinning one of my services to 4.5.12 (same version used in dropwizard), and I am still getting the same error.
@inssein The exact same error? According to the Jira issue I linked above, it was fixed in 4.5.10.
Can you check if your environment is resolving the dependency version to 4.5.12? If you are using Maven you can run `mvn dependency:tree`.
Yup, exactly the same error. I have run `mvn dependency:tree` and ensured all of them are pointing to 4.5.12 (and they are, because we use Dropwizard as a framework, and it has httpclient pinned at that version).
The next thing I am going to try is the url-connection HTTP client builder, to see if it resolves the issue.
I now have the code running with the URL connection HTTP client, and everything seems to be running smoothly, which points to an issue with the Apache HTTP client.
@debora-ito it does seem like it should be fixed in the new version, but `mvn dependency:tree` definitely shows 4.5.12. Anything else I can do to confirm that it is using the right dependency?
Can you share your dependency tree? Just the part regarding Apache httpcomponents:

```
mvn dependency:tree -Dverbose -Dincludes=org.apache.httpcomponents
```
```
[INFO] ---------------------< com.sednanetwork:sedna-db >----------------------
[INFO] Building sedna-db 1.0-SNAPSHOT [13/14]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ sedna-db ---
[INFO] com.sednanetwork:sedna-db:jar:1.0-SNAPSHOT
[INFO] +- com.sednanetwork:elasticsearch:jar:1.0-SNAPSHOT:compile
[INFO] |  \- org.elasticsearch.client:elasticsearch-rest-client:jar:6.4.3:compile
[INFO] |     +- org.apache.httpcomponents:httpclient:jar:4.5.12:compile (version managed from 4.5.2)
[INFO] |     +- org.apache.httpcomponents:httpcore:jar:4.4.13:compile (version managed from 4.4.5)
[INFO] |     +- org.apache.httpcomponents:httpasyncclient:jar:4.1.2:compile
[INFO] |     \- org.apache.httpcomponents:httpcore-nio:jar:4.4.5:compile
[INFO] \- software.amazon.awssdk:s3:jar:2.13.11:compile
[INFO]    \- software.amazon.awssdk:apache-client:jar:2.13.11:runtime
[INFO]       +- (org.apache.httpcomponents:httpclient:jar:4.5.12:runtime - version managed from 4.5.9; omitted for duplicate)
[INFO]       \- (org.apache.httpcomponents:httpcore:jar:4.4.13:runtime - version managed from 4.4.11; omitted for duplicate)
[INFO]
[INFO] ----------------< com.sednanetwork:sedna-veson-service >----------------
[INFO] Building sedna-veson-service 1.0-SNAPSHOT [14/14]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ sedna-veson-service ---
[INFO] com.sednanetwork:sedna-veson-service:jar:1.0-SNAPSHOT
[INFO] +- com.sednanetwork:sedna-db:jar:1.0-SNAPSHOT:compile
[INFO] |  \- com.sednanetwork:elasticsearch:jar:1.0-SNAPSHOT:compile
[INFO] |     \- org.elasticsearch.client:elasticsearch-rest-client:jar:6.4.3:compile
[INFO] |        +- org.apache.httpcomponents:httpclient:jar:4.5.12:compile (version managed from 4.5.2)
[INFO] |        +- org.apache.httpcomponents:httpcore:jar:4.4.13:compile (version managed from 4.4.5)
[INFO] |        +- org.apache.httpcomponents:httpasyncclient:jar:4.1.2:compile
[INFO] |        \- org.apache.httpcomponents:httpcore-nio:jar:4.4.5:compile
[INFO] \- software.amazon.awssdk:sqs:jar:2.13.11:compile
[INFO]    \- software.amazon.awssdk:apache-client:jar:2.13.11:runtime
[INFO]       +- (org.apache.httpcomponents:httpclient:jar:4.5.12:compile - version managed from 4.5.9; scope updated from runtime; omitted for duplicate)
[INFO]       \- (org.apache.httpcomponents:httpcore:jar:4.4.13:compile - version managed from 4.4.11; scope updated from runtime; omitted for duplicate)
```
This is very interesting, I'm also getting the same exact issue with 4.5.12. I saw some promising mods that people have done, but nothing fixes this problem, even replacing the trust store to trust all certs. But I can issue my REST request via Postman and in the Firefox 76.0 browser with no problems.
Yup, I gave up in the end and just used the URLConnectionClient for now as the service didn't require high performance, but have a todo to switch it out once this dependency is upgraded.
Hi @debora-ito, I've observed that `some-bucket.s3.amazonaws.com` and `some-bucket.s3.us-east-1.amazonaws.com` return different certificates.
Global endpoint:
```
$ true | openssl s_client -connect some-bucket.s3.amazonaws.com:443 2>/dev/null | openssl x509 -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            08:2d:f6:8e:e9:c6:93:15:be:bf:72:07:9b:38:10:fd
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=DigiCert Inc, OU=www.digicert.com, CN=DigiCert Baltimore CA-2 G2
        Validity
            Not Before: Nov  9 00:00:00 2019 GMT
            Not After : Mar 12 12:00:00 2021 GMT
        Subject: C=US, ST=Washington, L=Seattle, O=Amazon.com, Inc., CN=*.s3.amazonaws.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            ...
        X509v3 extensions:
            X509v3 Subject Alternative Name:
                DNS:*.s3.amazonaws.com, DNS:s3.amazonaws.com
            ...
```
Regional endpoint:
```
$ true | openssl s_client -connect some-bucket.s3.us-east-1.amazonaws.com:443 2>/dev/null | openssl x509 -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            0d:64:50:6b:45:f3:0c:e3:5a:6c:2d:df:2c:18:b4:37
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=DigiCert Inc, OU=www.digicert.com, CN=DigiCert Baltimore CA-2 G2
        Validity
            Not Before: Aug  4 00:00:00 2020 GMT
            Not After : Aug  9 12:00:00 2021 GMT
        Subject: C=US, ST=Washington, L=Seattle, O=Amazon.com, Inc., CN=s3.amazonaws.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            ...
        X509v3 extensions:
            X509v3 Subject Alternative Name:
                DNS:s3.amazonaws.com, DNS:*.s3.amazonaws.com, DNS:*.s3.dualstack.us-east-1.amazonaws.com, DNS:s3.dualstack.us-east-1.amazonaws.com, DNS:*.s3.us-east-1.amazonaws.com, DNS:s3.us-east-1.amazonaws.com, DNS:*.s3-control.us-east-1.amazonaws.com, DNS:s3-control.us-east-1.amazonaws.com, DNS:*.s3-control.dualstack.us-east-1.amazonaws.com, DNS:s3-control.dualstack.us-east-1.amazonaws.com, DNS:*.s3-accesspoint.us-east-1.amazonaws.com, DNS:*.s3-accesspoint.dualstack.us-east-1.amazonaws.com, DNS:*.s3.us-east-1.vpce.amazonaws.com
            ...
```
So, this code:
```kotlin
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.services.s3.S3Client
import software.amazon.awssdk.services.s3.model.DeleteObjectRequest
import software.amazon.awssdk.services.s3.model.DeleteObjectsRequest
import software.amazon.awssdk.services.s3.model.GetObjectRequest
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request
import software.amazon.awssdk.services.s3.model.ObjectIdentifier
import software.amazon.awssdk.services.s3.model.S3Object
import software.amazon.awssdk.services.s3.presigner.S3Presigner

fun main() {
    val bucket = awsConfiguration.bucket
    val prefix = "foo/bar"

    val listObjectsV2PaginatorResult = s3Client.listObjectsV2Paginator(
        ListObjectsV2Request
            .builder()
            .bucket(bucket)
            .prefix(prefix)
            .build()
    )

    val keys: List<String> = listObjectsV2PaginatorResult
        .contents()
        .stream()
        .map { it.key() }
        .toList()

    logger.info { "==================>>>>> KEYS: $keys" }
}
```
Worked in eu-central-1, but in us-east-1 it returned `SdkClientException: Unable to execute HTTP request: Certificate for <some-bucket.s3.amazonaws.com> doesn't match any of the subject alternative names`.
Full stack trace:
```
software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Certificate for <some-bucket.s3.amazonaws.com> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
	at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:97)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage$RetryExecutor.handleThrownException(RetryableStage.java:137)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage$RetryExecutor.execute(RetryableStage.java:95)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:63)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:43)
	at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
	at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:57)
	at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:37)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:81)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:61)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:43)
	at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
	at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
	at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:198)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:122)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:148)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:102)
	at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
	at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:55)
	at software.amazon.awssdk.services.s3.DefaultS3Client.listObjectsV2(DefaultS3Client.java:4926)
	at software.amazon.awssdk.services.s3.paginators.ListObjectsV2Iterable$ListObjectsV2ResponseFetcher.nextPage(ListObjectsV2Iterable.java:147)
	at software.amazon.awssdk.services.s3.paginators.ListObjectsV2Iterable$ListObjectsV2ResponseFetcher.nextPage(ListObjectsV2Iterable.java:138)
	at software.amazon.awssdk.core.pagination.sync.PaginatedResponsesIterator.next(PaginatedResponsesIterator.java:58)
	at software.amazon.awssdk.core.pagination.sync.PaginatedItemsIterable$ItemsIterator.<init>(PaginatedItemsIterable.java:58)
	at software.amazon.awssdk.core.pagination.sync.PaginatedItemsIterable.iterator(PaginatedItemsIterable.java:48)
	at java.lang.Iterable.spliterator(Iterable.java:101)
	at software.amazon.awssdk.core.pagination.sync.SdkIterable.stream(SdkIterable.java:34)
```
Setting `AWS_S3_US_EAST_1_REGIONAL_ENDPOINT` to `regional` fixed it.
But I believe this is a bug, because the exception is raised even when the region is explicitly passed to the `S3Client` builder:
```java
S3Client s3Client = S3Client.builder()
        .region(Region.of(region))
        .credentialsProvider(DefaultCredentialsProvider.create())
        .build();
```
@raonitimo that is the expected behavior: when `us-east-1` is provided as the region, the SDK defaults to the S3 global endpoint for legacy reasons. It would be a breaking change to make the SDK hit the us-east-1 regional endpoint by default, so using the `AWS_S3_US_EAST_1_REGIONAL_ENDPOINT` flag is the right way to do it.
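The endpoint selection described above can be summarized with a tiny illustrative helper (hypothetical code, not the SDK's actual resolver): us-east-1 is the only region that falls back to the legacy global hostname unless the regional flag is set.

```java
public class S3EndpointChoice {
    // Illustrative sketch of the behavior described above: for us-east-1 the
    // SDK defaults to the legacy global endpoint unless the
    // AWS_S3_US_EAST_1_REGIONAL_ENDPOINT=regional flag is set; every other
    // region always resolves to its regional endpoint.
    static String endpointFor(String region, boolean usEast1RegionalFlag) {
        if (region.equals("us-east-1") && !usEast1RegionalFlag) {
            return "s3.amazonaws.com"; // legacy global endpoint
        }
        return "s3." + region + ".amazonaws.com";
    }

    public static void main(String[] args) {
        System.out.println(endpointFor("us-east-1", false));    // s3.amazonaws.com
        System.out.println(endpointFor("us-east-1", true));     // s3.us-east-1.amazonaws.com
        System.out.println(endpointFor("eu-central-1", false)); // s3.eu-central-1.amazonaws.com
    }
}
```

This also explains why the global and regional certificates shown earlier differ: the flag changes which hostname (and therefore which certificate) the client sees.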
I'm having the exact same issue as described here. I have got it working using the `UrlConnectionHttpClient` instead, but I would like to use the `ApacheHttpClient` for its performance.
@debora-ito @raonitimo, would this be the correct way to get the SDK to pick up the `AWS_S3_US_EAST_1_REGIONAL_ENDPOINT` setting?

```
AWS_S3_US_EAST_1_REGIONAL_ENDPOINT=regional java -jar my_app.jar
```
For a Spring app with a bean configuration like:

```java
@Bean
public S3Client s3Client() {
    return S3Client.builder()
            .httpClient(ApacheHttpClient.builder().build())
            .build();
}
```
This is my stack trace:

```
SdkClientException: Unable to execute HTTP request: Certificate for <my-app.s3.amazonaws.com> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
```
The S3 bucket I'm getting the above error from is in US East (N. Virginia).
I have also made sure that the Apache client I've installed is the version stated as having the fix:

```
[INFO] |  +- software.amazon.awssdk:apache-client:jar:2.15.0:compile
[INFO] |  |  +- org.apache.httpcomponents:httpclient:jar:4.5.12:compile
```
Is anyone still encountering this with the latest SDK and Apache version? We're not able to reproduce in our testing.
Same issue here on us-east-1 using Java SDK version 2. The only affected region is us-east-1:

```
Caused by: javax.net.ssl.SSLPeerUnverifiedException: Certificate for <some-dashed-string-us-east-1.s3.amazonaws.com> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
```

Setting `AWS_S3_US_EAST_1_REGIONAL_ENDPOINT` to `regional` didn't work. Can anybody please help with this?
Yeah, I never resolved this issue either. We still use v1 for S3 because of this.
In the same boat. One workaround is to force a very specific Apache version: 4.5.10.
A dozen of our customers are hitting this issue. Based on our reports, it seems that the customers hitting this issue can always reproduce it. Unfortunately, the problem is not reproducible on our side, and often, a customer cannot reproduce this on a local machine but only on a CI pipeline. As people noted here, the issue happens only when using recent Apache versions. That is, 4.5.10 works but 4.5.12 or 4.5.13 fails. Our customers don't use Java SDK at all, so the issue is fundamentally unrelated to the SDK. So one workaround would be to force Apache 4.5.10. However, do note that using an older version prior to 4.5.10 may fail with the same error, so make sure you enforce the right version.
I've been debugging and testing the Apache code for some time, but I haven't found anything that can go wrong. Now I am starting to suspect that this might be an issue on Amazon's side where some regional factor plays a role. It seems that only Amazon S3 has this problem.
And I think using `AWS_S3_US_EAST_1_REGIONAL_ENDPOINT` (applicable only when using the Java SDK) only circumvents this issue for some customers by making them use a different endpoint and certificate.
Same issue appears:
- Apache httpclient version: 4.5.13
- AWS region: eu-central-1

```
Unable to execute HTTP request: Certificate for <x.x.x.x.x.x.x.s3.amazonaws.com> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
```
@chanseokoh thanks for the suggestion, I will try explicitly downgrading the lib version.
Hi @BMalaichik, did downgrading the lib version work 🤔
@dieggoluis for me it didn't. The source of the issue was that S3 bucket names with dots, like `app.data.xxx`, lead to incorrect HTTPS certificate resolution.
From the docs:
> For best compatibility, we recommend that you avoid using dots (.) in bucket names, except for buckets that are used only for static website hosting. If you include dots in a bucket's name, you can't use virtual-host-style addressing over HTTPS, unless you perform your own certificate validation. This is because the security certificates used for virtual hosting of buckets don't work for buckets with dots in their names.

So I was able to change our bucket naming convention to avoid any issues in the future.
Also facing this issue in us-east-1, with no dots (`.`) in the bucket name, having tried different versions of httpclient (4.5.5 through 4.5.13).
> @dieggoluis for me it didn't. The source of the issue was that S3 bucket names with dots, like `app.data.xxx`, lead to incorrect HTTPS certificate resolution.
I recall that dots in bucket names definitely cause trouble verifying certificates; you shouldn't use dots. However, I just want to make it clear that some customers do hit this issue even though their S3 bucket names don't contain dots.
I think this might be due to the fact that https://www.publicsuffix.org/list/public_suffix_list.dat contains "s3.amazonaws.com", which means any certificate with "*.s3.amazonaws.com" will be considered overly broad.
I've just had this very issue and can confirm from the errors that it is clearly linked to dots in the S3 bucket name. It is standard practice to name an S3 bucket like a reverse DNS name to avoid collisions (since S3 bucket names are globally scoped), e.g. `com.mydomain.mybucket`.
This is causing this very error for me right now. The error string makes it clear that the hostname can't be matched against the list of peer certificates, and this is not surprising, since wildcard certificates only cover a single level of subdomain.
The simplest solution, therefore, is to create another bucket whose dots are replaced with hyphens, which is also unlikely to clash with another global S3 bucket name.
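The single-label wildcard rule can be sketched in plain Java. This is an illustrative simplification of RFC 6125-style matching, not the actual httpclient implementation: `*` may stand in for exactly one DNS label, so a dotted bucket name pushes the hostname past what `*.s3.amazonaws.com` can cover.

```java
public class WildcardSanCheck {
    // Returns true if hostname is covered by the wildcard SAN pattern.
    // Simplified rule: "*" may replace only a single, non-empty, left-most label.
    static boolean matches(String hostname, String pattern) {
        if (!pattern.startsWith("*.")) {
            return hostname.equalsIgnoreCase(pattern);
        }
        String suffix = pattern.substring(1); // e.g. ".s3.amazonaws.com"
        if (!hostname.toLowerCase().endsWith(suffix.toLowerCase())) {
            return false;
        }
        String prefix = hostname.substring(0, hostname.length() - suffix.length());
        // The part replacing "*" must be one label: non-empty and without dots.
        return !prefix.isEmpty() && !prefix.contains(".");
    }

    public static void main(String[] args) {
        // Hyphenated bucket: a single label, so the wildcard matches.
        System.out.println(matches("my-bucket.s3.amazonaws.com", "*.s3.amazonaws.com"));              // true
        // Dotted bucket: three labels in front of the suffix, so no match.
        System.out.println(matches("com.mydomain.mybucket.s3.amazonaws.com", "*.s3.amazonaws.com")); // false
    }
}
```

This is exactly why renaming `com.mydomain.mybucket` to `com-mydomain-mybucket` makes the verification error disappear.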
In case it helps anyone else, I was hitting this error as part of a Spark job using `org.apache.hadoop.fs.s3a.S3AFileSystem`.
Setting these fixed the problem:

```
spark.hadoop.fs.s3a.endpoint=s3.us-east-1.amazonaws.com
spark.hadoop.fs.s3a.path.style.access=true
```
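Path-style access sidesteps the certificate problem because the bucket name moves out of the hostname and into the URL path, so the hostname the verifier checks is just the regional endpoint. A hedged sketch of the two URL shapes (illustrative string building only, not the SDK's actual endpoint resolution):

```java
public class S3UrlStyles {
    // Virtual-hosted style: bucket becomes part of the hostname, so a dotted
    // bucket name breaks single-label wildcard certificate matching.
    static String virtualHosted(String bucket, String region, String key) {
        return "https://" + bucket + ".s3." + region + ".amazonaws.com/" + key;
    }

    // Path style: hostname is always the plain regional endpoint; the bucket
    // only appears in the path, so the certificate always matches.
    static String pathStyle(String bucket, String region, String key) {
        return "https://s3." + region + ".amazonaws.com/" + bucket + "/" + key;
    }

    public static void main(String[] args) {
        System.out.println(virtualHosted("app.data.xxx", "us-east-1", "file.txt"));
        System.out.println(pathStyle("app.data.xxx", "us-east-1", "file.txt"));
    }
}
```

The `fs.s3a.path.style.access=true` setting above asks the s3a connector to generate the second form.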
I have the following persistent problem:

```
javax.net.ssl.SSLPeerUnverifiedException: Certificate for <myapp.myapp-minio-api.staging.dev.example.com> doesn't match any of the subject alternative names: [*.dev.example.com, *.staging.dev.example.com, dev.example.com]
	at org.apache.http.conn.ssl.SSLConnectionSocketFactory.verifyHostname(SSLConnectionSocketFactory.java:507) ~[httpclient-4.5.14.jar:4.5.14]
	at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:437) ~[httpclient-4.5.14.jar:4.5.14]
```

My configuration:
- software.amazon.awssdk BOM 2.20.87
- org.apache.httpcomponents.httpclient pinned to 4.5.14

The code I use to build the S3Client:
```java
private S3Client getClient() {
    AwsCredentialsProvider credentialProvider = getCredentialProvider();

    S3ClientBuilder builder =
        S3Client.builder()
            .credentialsProvider(credentialProvider)
            .region(getRegion(config.getRegion()));

    if (StringUtils.isNotBlank(config.getEndpointUrl())) {
        LOG.debug("Using custom endpoint: {}", config.getEndpointUrl());
        builder.endpointOverride(URI.create(config.getEndpointUrl()));
    }

    return builder.build();
}
```
What surprises me: the endpoint I configure is `https://myapp-minio-api.staging.dev.example.com`, yet the error message shows `myapp.myapp-minio-api.staging.dev.example.com`. Why is the bucket name placed before the address? (EDIT: solved by `builder.endpointOverride(URI.create(config.getEndpointUrl())).forcePathStyle(true);`)
Why I think the bug is in the SDK:
@pwannenmacher from my comment (https://github.com/aws/aws-sdk-java-v2/issues/1786#issuecomment-904148428):
> One workaround is to force a very specific Apache version: 4.5.10. ... As people noted here, the issue happens only when using recent Apache versions. That is, 4.5.10 works but 4.5.12 or 4.5.13 fails. ... do note that using an older version prior to 4.5.10 may fail with the same error, so make sure you enforce the right version.
> Our customers don't use Java SDK at all, so the issue is fundamentally unrelated to the SDK.
Back then, I meticulously checked the code of all the Apache versions. However, nothing seemed wrong, and I couldn't explain how this could happen. I was very perplexed. Our customers (who don't use the Java SDK) hit this issue only with Amazon S3.
Reproduced this bug with hadoop-* 3.3.5 and aws-java-sdk 1.12.310. Fixed by rolling back to versions 3.1.0 (hadoop-*) and 1.11.655 (aws-java-sdk-*).
For me, this piece of code causes the problem. What is going on with the hostname verifier with the latest dependencies? I can see the hostname verifier is initialized with an `X509HostnameVerifier`. The main difference between the two implementations (the first one causes the problem): [screenshots not preserved here]
Seems to me there's a couple of problems being discussed here:

1. Buckets with dots in their name. AWS say "only do this for buckets serving static web page content". If you must do that, know that the s3a connector and others really hate it and you must set path-style access. But it is best to follow their guidance: use a single word for the bucket name, and make it a valid hostname.
2. Public suffix list contamination. This can arise if there's another JAR on the classpath which includes the `mozilla/public-suffix-list.txt` resource and it is out of date, especially with relation to new AWS regions. This problem is hard to replicate as it depends on the order in which JARs are loaded; test environments may not match production systems. Fix there: identify the JAR containing the resource, then remove or upgrade it. If that can't be done: cut the file from the JAR.
I have recently rewritten a service to use the newer AWS SDK (v2), but I am struggling with an error I just can't seem to figure out.
Short snippet:
Description
This service communicates a lot with a few other AWS services, and everything there is fine, but when it is running in production, it seems to have issues writing to customer buckets with the error above.
I have gone ahead and changed this project to only use AWS SDK v1 for S3 with almost identical commands, and it works fine.
I am running on an EC2 instance, using JDK 8, and using the latest version of this library.
Full stack trace