spring-attic / spring-security-saml

SAML extension for the Spring Security project

javax.net.ssl.SSLException: SSLSession was invalid: Likely implicit handshake failure #233

Open sagarmj-cloud opened 6 years ago

sagarmj-cloud commented 6 years ago

I recently tried to upgrade to the 1.0.4 release with spring-boot 2.0.0.RELEASE. All my config is Java-based, and I have been getting these errors while executing my unit tests.

03:00:49.926 [Metadata-reload] ERROR org.opensaml.saml2.metadata.provider.HTTPMetadataProvider - Error retrieving metadata from https://some.host.com:9031/pf/federation_metadata.ping?PartnerSpId=urn:xxx:yyy:ccc javax.net.ssl.SSLException: SSLSession was invalid: Likely implicit handshake failure: Set system property javax.net.debug=all for details

I am receiving this error only with 1.0.4.RELEASE; 1.0.3.RELEASE works just fine and all my unit tests pass. With SSL debugging turned on, I get the following information.

Metadata-reload, IOException in getSession(): javax.net.ssl.SSLHandshakeException: org.springframework.security.saml.trust.UntrustedCertificateException: Peer SSL/TLS certificate 'CN=some.host.com, O=XXXX, L=Glendale, ST=California, C=US' issued by 'CN=Entrust Certification Authority - L1K, OU="(c) 2012 Entrust, Inc. - for authorized use only", OU=See www.entrust.net/legal-terms, O="Entrust, Inc.", C=US' is not trusted, add the certificate or it's CA to your trust store and optionally update tlsKey in extended metadata with the certificate's alias

The following certificates (in PEM format) were presented by the peer. The content between the begin/end certificate markers (inclusive) can be stored in a file and imported using keytool, e.g. 'keytool -importcert -file cert.cer -alias certAlias -keystore keystore.jks'. Make sure the presented certificates are issued by your trusted CA before adding them to the keystore. Any help is appreciated.

gstanchev commented 6 years ago

Try setting -Dorg.opensaml.httpclient.https.disableHostnameVerification=true and report back whether it fixes your issue.

tryge commented 6 years ago

I encountered this issue as well and spent a lot of time debugging it. As far as I understand, it can't possibly work; let me explain:

In TLSProtocolSocketFactory the trustEngine is initialized with a name evaluator, but no trusted names are configured. The StaticPKIXValidationInformationResolver signals that it can provide trusted names (by returning true from supportsTrustedNameResolution) and then returns an empty set for the trusted names. That empty set subsequently fails the check in the configured BasicX509CredentialNameEvaluator, which treats an empty set as failed name validation.

Simply omitting the name evaluator when constructing the trust engine will fix the issue.
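
For illustration, a minimal sketch of that workaround. Assumptions: the classes come from xmltooling's org.opensaml.xml.security.x509 package, the three-argument PKIXX509CredentialTrustEngine constructor (resolver, PKIX evaluator, name evaluator) is available in the version this release pulls in, and validationInfo is a hypothetical List<PKIXValidationInformation> built from your trusted certificates. This is not the actual spring-security-saml source; where it plugs in depends on how you patch or replace the socket factory.

// No trusted names are supplied; with no name evaluator configured below, the
// trust engine skips the trusted-name check instead of failing on an empty set.
StaticPKIXValidationInformationResolver pkixResolver =
        new StaticPKIXValidationInformationResolver(validationInfo, null);

PKIXX509CredentialTrustEngine trustEngine = new PKIXX509CredentialTrustEngine(
        pkixResolver,
        new CertPathPKIXTrustEvaluator(),
        null); // name evaluator omitted, as suggested above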

tryge commented 6 years ago

Just for reference, setting the system property doesn't help, as it never even reaches the point of performing the hostname verification.

ShostakRV commented 6 years ago

I have the same issue in version 1.0.4. Is there any workaround, or a proper fix?

activx commented 6 years ago

I have the same issue!

OrangeDog commented 5 years ago

Related SO question.

OrangeDog commented 5 years ago

Still present in 1.0.9.RELEASE. I'm not sure which yet, but it either starts happening after some indeterminate time, or it only happens on refresh. The first attempt to fetch the IdP metadata must've succeeded because everything else is working. I'll have some debug output to share once it starts happening again.

fhanik commented 5 years ago

Do any of you have an IDP that I can use to reproduce this error? I would need this in order to debug the behavior. Feel free to email me fhanik@pivotal.io

OrangeDog commented 5 years ago

@fhanik I'm using this one for testing https://www.ssocircle.com but it happens on my main one too.

fhanik commented 5 years ago

@OrangeDog Can you post your actual config (XML) with the metadata URL in it? (Trying to avoid spending time configuring something vastly different from yours, and thus not even troubleshooting your real issue.)

OrangeDog commented 5 years ago

@fhanik what config? Do you mean my SP metadata? You won't be able to use it because you don't have my private keys.

fhanik commented 5 years ago

Just the metadata URLs that you have configured, and the XML for that. The error should happen before the private key is ever needed.

OrangeDog commented 5 years ago

Shared privately. Looks like, even though the certificates are in the configured keystore and their CAs are in the system cacerts, it just stops trusting them.

Timer-0, IOException in getSession(): javax.net.ssl.SSLHandshakeException: org.springframework.security.saml.trust.UntrustedCertificateException: Peer SSL/TLS certificate 'CN=idp.ssocircle.com' issued by 'CN=Let's Encrypt Authority X3, O=Let's Encrypt, C=US' is not trusted, add the certificate or it's CA to your trust store and optionally update tlsKey in extended metadata with the certificate's alias

OrangeDog commented 5 years ago

N.B. there's a typo to fix in that error message ;)

fhanik commented 5 years ago

Simply omitting the name evaluator when constructing the trust engine will fix the issue.

That's from a previous comment as a workaround.

I took the sample and increased the refresh frequency for the metadata updates:

        <property name="maxRefreshDelay" value="2000"/>
        <property name="minRefreshDelay" value="500"/>

But I'm not getting any errors at all. It's doing a handshake every 1 to 2 seconds.

<bean id="metadata" class="org.springframework.security.saml.metadata.CachingMetadataManager">
        <constructor-arg>
            <list>
                <!-- Example of classpath metadata with Extended Metadata -->
                <bean class="org.springframework.security.saml.metadata.ExtendedMetadataDelegate">
                    <constructor-arg>
                        <bean class="org.opensaml.saml2.metadata.provider.ResourceBackedMetadataProvider">
                            <constructor-arg>
                                <bean class="java.util.Timer"/>
                            </constructor-arg>
                            <constructor-arg>
                                <bean class="org.opensaml.util.resource.ClasspathResource">
                                    <constructor-arg value="/metadata/idp.xml"/>
                                </bean>
                            </constructor-arg>
                            <property name="parserPool" ref="parserPool"/>
                        </bean>
                    </constructor-arg>
                    <constructor-arg>
                        <bean class="org.springframework.security.saml.metadata.ExtendedMetadata">
                        </bean>
                    </constructor-arg>
                </bean>
                <!-- Example of HTTP metadata without Extended Metadata -->
                <bean class="org.opensaml.saml2.metadata.provider.HTTPMetadataProvider">
                    <!-- URL containing the metadata -->
                    <constructor-arg>
                        <value type="java.lang.String">https://idp.ssocircle.com/idp-meta.xml</value>
                    </constructor-arg>
                    <!-- Timeout for metadata loading in ms -->
                    <constructor-arg>
                        <value type="int">15000</value>
                    </constructor-arg>
                    <property name="parserPool" ref="parserPool"/>
                    <property name="maxRefreshDelay" value="2000"/>
                    <property name="minRefreshDelay" value="500"/>
                </bean>
                <!-- Example of file system metadata without Extended Metadata -->
                <!--
                <bean class="org.opensaml.saml2.metadata.provider.FilesystemMetadataProvider">
                    <constructor-arg>
                        <value type="java.io.File">/usr/local/metadata/idp.xml</value>
                    </constructor-arg>
                    <property name="parserPool" ref="parserPool"/>
                </bean>
                -->
            </list>
        </constructor-arg>
        <!-- OPTIONAL used when one of the metadata files contains information about this service provider -->
        <!-- <property name="hostedSPName" value=""/> -->
        <!-- OPTIONAL property: can tell the system which IDP should be used for authenticating user by default. -->
        <!-- <property name="defaultIDP" value="http://localhost:8080/opensso"/> -->
    </bean>

Are you configuring a different socket factory than the one in the sample?

OrangeDog commented 5 years ago

Simply omitting the name evaluator when constructing the trust engine will fix the issue.

Sure it'll (probably) stop the errors but it reduces security.

I'm using Java configuration. These should be all the relevant beans. Errors don't start until after about three hours, after which they're every ten minutes or so, for every HTTPMetadataProvider.

private final Timer backgroundTaskTimer = new Timer(true);
private final MultiThreadedHttpConnectionManager connectionManager = new MultiThreadedHttpConnectionManager();

@Bean(initMethod = "initialize")
public StaticBasicParserPool parserPool() {
    return new StaticBasicParserPool();
}

@Bean
public HttpClient httpClient() {
    return new HttpClient(this.connectionManager);
}

@Bean
public TLSProtocolConfigurer tlsProtocolConfigurer() {
    return new TLSProtocolConfigurer();
}

@Bean
public KeyManager samlKeyManager(ResourceLoader loader) {
    return new JKSKeyManager(
            loader.getResource(properties.getKeystorePath()),
            properties.getKeystorePassword(),
            Collections.singletonMap(properties.getKeyAlias(), properties.getKeyPassword()),
            properties.getKeyAlias()
    );
}

@Bean
public CachingMetadataManager metadataManager() throws MetadataProviderException {
    List<MetadataProvider> providers = new ArrayList<>(properties.getIdp().size());
    for (URL url : properties.getIdp()) {
        HTTPMetadataProvider provider = new HTTPMetadataProvider(backgroundTaskTimer, httpClient(), url.toString());
        provider.setParserPool(parserPool());
        provider.initialize();
        providers.add(provider);
    }
    CachingMetadataManager manager = new CachingMetadataManager(providers);
    manager.setRefreshCheckInterval(Duration.ofHours(1).toMillis());
    return manager;
}

tryge commented 5 years ago

Sure it'll (probably) stop the errors but it reduces security.

Only if there's something configured, which it wasn't at the time when I looked at it.

returns an empty set for the trusted names

OrangeDog commented 5 years ago

Solved my problem. I was calling provider.initialize() too early. This should not be called manually, but should be left to the MetadataManager. Otherwise the first fetch is done using the system trust store (which works) instead of the configured one.

Removing the TLSProtocolConfigurer causes it to always use the system trust store, so I didn't bother working out what was wrong with my own one.
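
For reference, this is what my corrected bean looks like; it is identical to the one above except that provider.initialize() is gone (a sketch against my own configuration, so the fields and properties are the same ones shown earlier):

@Bean
public CachingMetadataManager metadataManager() throws MetadataProviderException {
    List<MetadataProvider> providers = new ArrayList<>(properties.getIdp().size());
    for (URL url : properties.getIdp()) {
        HTTPMetadataProvider provider = new HTTPMetadataProvider(backgroundTaskTimer, httpClient(), url.toString());
        provider.setParserPool(parserPool());
        // no provider.initialize() here: initialization is left to the
        // MetadataManager, so the first fetch uses the configured trust store
        providers.add(provider);
    }
    CachingMetadataManager manager = new CachingMetadataManager(providers);
    manager.setRefreshCheckInterval(Duration.ofHours(1).toMillis());
    return manager;
}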

fhanik commented 5 years ago

Thanks @OrangeDog, that makes sense. I ran an 8-hour test and didn't encounter any errors.

johnloven commented 5 years ago

I solved this by removing some of my unused Java beans that I'd copy pasted from some guide.

ptanwar27 commented 5 years ago

Do we have a resolution for this bug? @OrangeDog @fhanik @johnloven

Could you please help me with this issue? It was working fine with version 1.0.3, and I upgraded to 1.0.9. I tried both of the solutions you mentioned, but no luck. My IdP is ADFS and I am trying to load the IdP metadata through a URL. My bean configs are below:

@Bean
public ExtendedMetadataDelegate adfsExtendedMetadataProvider() throws MetadataProviderException {
    Timer backgroundTaskTimer = new Timer(true);
    HTTPMetadataProvider httpMetadataProvider = new HTTPMetadataProvider(backgroundTaskTimer, new HttpClient(),samlServerConfig.getMetadataPath());
    httpMetadataProvider.setParserPool(parserPool());
    httpMetadataProvider.initialize();

    ExtendedMetadataDelegate extendedMetadataDelegate = new ExtendedMetadataDelegate(httpMetadataProvider, extendedMetadata());
    extendedMetadataDelegate.setMetadataTrustCheck(false);
    extendedMetadataDelegate.setMetadataRequireSignature(false);
    return extendedMetadataDelegate;
}

@Bean
@Qualifier("metadata")
public CachingMetadataManager metadata() throws MetadataProviderException {
    List<MetadataProvider> providers = new ArrayList<MetadataProvider>();
    providers.add(adfsExtendedMetadataProvider());
    CachingMetadataManager manager = new CachingMetadataManager(providers);
    manager.setRefreshCheckInterval(Duration.ofHours(1).toMillis());
    return manager;
}

johnloven commented 5 years ago

@ptanwar27 Try removing httpMetadataProvider.initialize();

nilscodes commented 4 years ago

I have the same issue, and I do not call initialize too early. I might try to reproduce a scenario tomorrow, but as soon as I use a custom KeyManager for trust resolution, my test fails. I can even see that it is using the right keystore internally, but it still doesn't trust the certificate.

I pretty much debugged down to what @tryge was seeing: it doesn't even get to hostname verification. It dies on a call to sslSession.isValid() in the non-Spring TLSProtocolSocketFactory that was not present in the openws version spring-saml2 1.0.3 used.

johnloven commented 4 years ago

@npeuser TLSProtocolSocketFactory (or actually TLSProtocolConfigurer) was one of the bean declarations I removed to get it to work.

ol2ka commented 4 years ago

Any news on this issue? I have exactly the same problem, and it prevents me from upgrading from 1.0.3. With 1.0.3 the same code works; with 1.0.9 it does not. As far as I can see it was already a problem in 1.0.4. Are there no plans to fix it? (Removing TLSProtocolConfigurer doesn't help.)

OrangeDog commented 4 years ago

@ol2ka

This repository is not being actively maintained, but will remain hosted for educational and reference purposes.

For this issue, as others have noted, it's always turned out to be a configuration problem. There may not be a bug.

fhanik commented 4 years ago

@OrangeDog Thanks James, I agree. I haven't received enough information to reliably reproduce it. So it is hard to fix until then.

chubbard commented 4 years ago

I agree with @tryge and @npeuser; I am having this same problem (among other issues: certificates are loaded once and only once at startup, which is a big problem too).

But in order to see this problem you have to install the TLSProtocolConfigurer bean, which registers a global socket factory for the HttpClient. That factory never works, because host name evaluation fails when no trustedNames are installed. The documentation says that if you do not provide any trusted names, it will use all of the keys returned by the KeyManager.

But when you get into BasicX509CredentialNameEvaluator.evaluate(), it quits and returns false when trustedNames is null or empty. And it is always null, because TLSProtocolSocketFactory.getPKIXResolver() initializes StaticPKIXValidationInformationResolver with a hard-coded null for trustedNames!

That's why people are saying it will never work. I think the only way it works is by not using TLSProtocolConfigurer, which explains why @OrangeDog and @johnloven made it work by removing unused beans. It has nothing to do with calling initialize too soon, unless you are trying to initialize within a bean start method before everything is built.

chubbard commented 4 years ago

I was able to fix it by rewriting TLSProtocolConfigurer and TLSProtocolSocketFactory. I think at this point I can provide a pull request to fix those parts of it.

jzheaux commented 4 years ago

@chubbard, is this something that you are still interested in submitting a PR for?

Also, based on your description, it appears that you are able to get the behavior all the time instead of intermittently. If that's the case, it would be nice if your PR included some tests that fail before your change and succeed afterward.

chubbard commented 4 years ago

@jzheaux Yes, I wanted to finish the submission. I ran into some tests that failed, but I wasn't able to figure out why, because I wasn't sure what they were testing. I suspected that the tests were coded in a way that enforces the problematic behavior. But I ran out of time working on it, which is why I didn't finish. I might need help understanding what those tests were trying to show.

chubbard commented 4 years ago

@jzheaux So I've looked a little more into the errors I'm having after my changes were applied. The tests that are failing are MetadataManagerSignatureTest.testSignature_chain_CA and testSignature_validCA. So I rolled my changes back and tested: the tests succeeded, but only because they never check the actual certificates for validity. Once my changes are applied, the tests actually run the trust verification process, and they fail because the certificates in the JKS keystore don't match the certificates being used in the test.

At this point I'm not sure about the certificate setup here. Whatever created these certificates doesn't appear to be included. I'm not completely sure about the cert chain either, but we'd need at least something from the cert chain imported in the MetadataManager in order for these tests to work. Any ideas?

chubbard commented 4 years ago

@jzheaux Any feedback? You asked for some additional tests, but what I've found is that the tests weren't testing what the author thought. They were succeeding, but only because the certificate-checking logic was never executed. Once I added my code, it exposed this oversight in the test configuration, because the certs in the keystore are not the cert being presented. The certificate check only compares the presented certificate against the certs in the keystore; it doesn't walk the cert chain to see whether it trusts the certificate's issuer chain. So either the test isn't set up correctly or it isn't checking what was thought. Some guidance would be appreciated.

jzheaux commented 4 years ago

Some things do come to mind after digging into the two unit test failures, @chubbard.

First, neither MetadataManager nor TLSProtocolSocketFactory currently support name evaluation.

This is clear from the fact that MetadataManager passes null for trustedNames when it constructs its PKIXValidationInformationResolver:

return new PKIXSignatureTrustEngine(
        getPKIXResolver(provider, trustedKeys, null),  // <-- null trusted names
        Configuration.getGlobalSecurityConfiguration().getDefaultKeyInfoCredentialResolver(),
        new org.springframework.security.saml.trust.CertPathPKIXTrustEvaluator(pkixOptions),
        new BasicX509CredentialNameEvaluator());

And, as you've already observed, TLSProtocolSocketFactory passes null into StaticPKIXValidationInformationResolver.

Second, this started breaking in 1.0.4 because of an OpenSAML change that Spring Security picked up in that release.

I followed the link supplied in the MetadataManager comments, and learned that OpenSAML's BasicX509CredentialNameEvaluator underwent a significant change from "always trust" to "never trust" when trustNames == null. In that same commit, PKIXX509CredentialTrustEngine and PKIXSignatureTrustEngine were both changed to skip name evaluation when trustNames == null.

Looking then at MetadataManager, it's clear from 006be6763b05ba3d040da24e177d7620f81d0b2f that StaticPKIXValidationInformationResolver was intentionally subclassed to support the old behavior of disabling name evaluation.
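
For readers following along, a minimal sketch of that idea (not the actual MetadataManager source, which may do it differently; the assumed class and method names are from xmltooling's org.opensaml.xml.security.x509 package): a subclass that reports it cannot resolve trusted names, which keeps the resolved name set null so the trust engines skip name evaluation.

import java.util.Collections;
import java.util.List;

import org.opensaml.xml.security.x509.PKIXValidationInformation;
import org.opensaml.xml.security.x509.StaticPKIXValidationInformationResolver;

public class NameEvaluationDisablingResolver extends StaticPKIXValidationInformationResolver {

    public NameEvaluationDisablingResolver(List<PKIXValidationInformation> info) {
        super(info, Collections.<String>emptySet());
    }

    @Override
    public boolean supportsTrustedNameResolution() {
        // Signal "no trusted names available" so PKIX trust engines skip the
        // trusted-name check entirely instead of evaluating an empty set.
        return false;
    }
}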

But, there was no corresponding change made to TLSProtocolSocketFactory, which I think was missed due to lack of testing in that area. I imagine this is why it started to fail between 1.0.3 and 1.0.4.

To confirm all of the above, @sagarmj76, a minimal sample that reproduces the issue would be really helpful. Are you able to provide one? I think you said you had some unit tests that started breaking.

Third, based on my research, adding support for trust names would be a feature, not a bug fix. Since the project's in maintenance mode, I'd instead recommend changing TLSProtocolSocketFactory to handle trust names in the same way that Spring Security SAML 1.0.3 did, and like MetadataManager currently does.

At the same time, it would be helpful to introduce some testing around TLSProtocolSocketFactory, at least for its default setup.

@chubbard, since you've already begun the process, would you be interested in submitting a PR that restores how TLSProtocolSocketFactory behaved in 1.0.3?

Lastly, for those who need name evaluation, I believe it can be achieved by subclassing TLSProtocolSocketFactory and MetadataManager (or CachingMetadataManager) and overriding their respective getPKIXResolver methods.
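
For illustration, the core of such an override is handing the resolver a non-empty trusted-name set instead of null. A sketch under stated assumptions: info is the List<PKIXValidationInformation> that the existing getPKIXResolver implementation already builds from the trusted keys, the two-argument StaticPKIXValidationInformationResolver constructor takes that list plus a Set<String> of names, and idp.example.com is a hypothetical host name you expect in the peer certificate.

// Returned from the overridden getPKIXResolver(...) so that name evaluation
// has real names to match against rather than an empty set.
Set<String> trustedNames = new HashSet<String>(Arrays.asList("idp.example.com"));
return new StaticPKIXValidationInformationResolver(info, trustedNames);

Whether BasicX509CredentialNameEvaluator then accepts the presented certificate still depends on the names that certificate actually carries, as discussed further below.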

Or better yet, log an issue to Spring Security proper, where new SAML features are going, and we can take a look at adding it there.

chubbard commented 4 years ago

Hi @jzheaux, thank you for getting back to me. That was very informative, and it makes a lot of what I've been doing and struggling with make sense now, particularly the OpenSAML breaking change picked up in 1.0.4. I don't always see/understand the dividing line between these two projects.

So I reviewed my changes, and I think what I've done is made the name evaluation feature work for TLSProtocolSocketFactory and MetadataManager. FWIW I use these modifications, along with unrelated modifications, in a production system, and it's working great. But it may be because of my specific situation that this works, since I required importing the IdP's presented certificate into our keystore rather than trusting it because it's signed by a trusted cert (it's the latter situation the unit tests try to cover, I think). That being said, there may still be things that need to happen in order to support the unit tests as written, but I think those are inside OpenSAML.

BasicX509CredentialNameEvaluator looks for the presented certificate in the trusted names, but the unit tests don't explicitly add that certificate to the trusted names. They only add the certs in the JKS keystore (CN=Test CA, CN=apollo, CN=SAML Extension Test CA). Now, it turns out the presented certificate is issued by one of the certificates in the keystore (CN=Test CA), but BasicX509CredentialNameEvaluator will never see that because it doesn't walk the certificate chain of the presented certificate, so it rejects it. Therein lies the crux of the problem: the unit tests are testing a use case OpenSAML just doesn't seem to support.

So the first question is: have I analyzed the purpose of the unit tests correctly, in that they are trying to see whether a presented certificate signed by a trusted name is in fact trusted? Secondly, given that OpenSAML is in the driver's seat regarding the decision to trust a certificate or not, is this a supported use case? Should those unit tests be turned off until OpenSAML changes how it evaluates trusted names?

As far as going backwards, my original experience with TLSProtocolSocketFactory is that it breaks at the JSSE level when trust names are null. That's where all of this started; I had encountered those problems even in 1.0.3. If you skipped installing TLSProtocolSocketFactory then you could get it to work, which just used CA certs as usual for trust. I think if you wanted the socket factory itself to work you'd have to install a trust-everything TrustManager, which has security issues.
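
For clarity, this is the kind of thing meant by a trust-everything TrustManager; it is shown only to illustrate the trade-off, since it disables certificate validation entirely and should not be used in production:

import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

// Illustration only: accepts any certificate chain without validation.
public class TrustEverythingTrustManager implements X509TrustManager {

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType) {
        // accept everything
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType) {
        // accept everything
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }
}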

jzheaux commented 4 years ago

I think the relevant code is in PKIXX509CredentialTrustEngine and PKIXSignatureTrustEngine where they both do:

if (trustedNames == null) {
    log.debug("Trusted names was null, signalling PKIX resolver does not support trusted names resolution, "
            + "skipping trusted name evaluation");
    return true;
}

which is why MetadataManager tries extra-hard to return null, disabling that check.

In that they are trying to see if a presented certificate that is signed by a trusted name is in fact trusted?

Yes, I believe that the validCA test is wanting to demonstrate that metadata signed by a CA is trusted if that CA is in the SP's keystore. chain_CA is wanting to demonstrate that metadata signed with an end certificate will be accepted if the end certificate's chain leads to a CA cert that's in the SP's keystore.

If you debug into it, the CNs indeed don't match, so this may be an oversight on the part of the tester. However, it's probably immaterial for this test since it appears by the commit history that Spring Security SAML has always disabled trust name evaluation.

Should those unit tests be turned off until Open SAML changes how it evaluates trusted names?

There are no further upgrades to OpenSAML planned for this project. All of that work has been moved to Spring Security proper. However, let me try and address your question in another way:

First, I believe the unit tests will continue to work if the only change we make is to have TLSProtocolSocketFactory subclass StaticPKIXValidationInformationResolver to disable trust name evaluation. Tests could confirm this, but I believe this will restore the previous behavior, meaning that at least we are no worse off than before.

Or second, I believe they'd also start working if we 1. merge your changes and 2. update the tests so their payloads are signed with an appropriate cert (one where the CN matches). I hesitate to do this since Spring Security SAML's intention to disable trust name evaluation seems quite clear.

But, it may be because of my specific situation that this works as I required importing the presented certificate of the IDP into our keystore vs trusting it because it's signed by a trusted cert

One thing that's a bit mystifying in OpenSAML is why trust names are a separate parameter in the first place in StaticPKIXValidationInformationResolver. Why not derive it from the other information given? Since they are separate constructor parameters, it makes me wonder if deriving them by default is reasonable. (If it were reasonable, it seems that the resolver would just loop through the certs itself and add the trust names for you.)

Perhaps you'd be interested in introducing a sample that demonstrates how to configure Spring Security SAML to add the trust names? My guess is that this can be achieved by subclassing MetadataManager and TLSProtocolSocketFactory and overriding their getPKIXResolver methods, though it sounds like you may also have another way you've done that in your production environment.

As far as going backwards my original experience with TLSProtocolSocketFactory is that it breaks at the JSSE level when trust names are null.

Do you have a sample that confirms this? I believe the PKIXX509CredentialTrustEngine will pass it when trust names are null, but I think you are saying that it will fail further downstream.

I think if you wanted it to work you'd have to simply install a Trust Everything TrustManager which will have security issues.

I thought that's what Spring Security SAML's X509TrustManager was for. It'll use a trust engine that has the trust name evaluation disabled. What am I forgetting?

chubbard commented 4 years ago

As far as going backwards my original experience with TLSProtocolSocketFactory is that it breaks at the JSSE level when trust names are null. Do you have a sample that confirms this?

This will take a little while to mock it up, but I will try to create one. In snooping the code I think I know why...

I had to remind myself why we were configuring the TLSProtocolConfigurer, which leads to this original problem. The reason was that we are a multi-tenant solution and wanted to allow tenants to provide their own key stores / trust managers (don't fixate on the multi-tenant thing, I know what the docs say; I've made lots of other changes to support this ;-). The documentation led me to believe that it behaved as you described, i.e. that trust is derived from the key store provided, but the trust names are NOT derived from the keys. But that's not the whole story.

It's complicated to explain, but passing null into StaticPKIXValidationInformationResolver doesn't turn off trust name evaluation. The reason is that StaticPKIXValidationInformationResolver recognizes the null and internally sets trustedNames = Collections.EMPTY_SET. It also returns true from supportsTrustedNameResolution; if it returned false this might be avoided, since that would signal that it doesn't support trusted name resolution. But it continues...

Then inside PKIXX509CredentialTrustEngine is where changing trustedNames from null to Collections.EMPTY_SET hits the wall. Inside checkNames, this is the code:

} else if (trustedNames == null) {
    log.debug("Trusted names was null, signalling PKIX resolver does not support trusted names resolution, "
            + "skipping trusted name evaluation");
    return true;
} else {
    return credNameEvaluator.evaluate(untrustedCredential, trustedNames);
}

Since StaticPKIXValidationInformationResolver returns an empty set, the credNameEvaluator (our old friend BasicX509CredentialNameEvaluator) actually runs. Doh! And since BasicX509CredentialNameEvaluator doesn't support walking the certificate chain, this is going to fail whenever real SSL certs are signed by real CAs, because the actual certificates aren't in the keystore. Doh!! Even if the trust names were passed through, this would still die when the trust names are only the parents of the certificates, again because of BasicX509CredentialNameEvaluator's deficiencies. Doh!!!

I think this is only working for my code because I make the tenant import the presented certificate before we connect. This is a pretty strong argument for raising this with OpenSAML, or implementing a different SSL TrustManager not based on OpenSAML if they refuse to fix it.

I thought that's what Spring Security SAML's X509TrustManager was for. It'll use a trust engine that has the trust name evaluation disabled. What am I forgetting?

All of the above logic is connected to the TLSProtocolSocketFactory acting as the SSL TrustManager installed at the JSSE level, so it's not a simple trust-all policy for the TrustManager. The combination of PKIXX509CredentialTrustEngine + StaticPKIXValidationInformationResolver + BasicX509CredentialNameEvaluator doesn't fully support real trusted name verification, and (in combination) it doesn't support turning it off either.

There are no further upgrades to OpenSAML planned for this project.

Well, I took a look at later OpenSAML versions (3.0.0, the latest 3.x, and the latest 4.x), and those versions don't support trusting a certificate name even if the parent certificate's CN, Alt CN, etc. are in the trustedNames. So even OpenSAML doesn't support what the unit tests are testing for. In fact, it looks like they haven't touched that code in later versions.

Which means that if these tests have to remain, Spring Security SAML can never make trusted name evaluation work, because it ultimately boils down to BasicX509CredentialNameEvaluator supporting this use case, and in almost a decade the OpenSAML team has never tried to implement that. I do not know their reason; it might be something interesting to ask them, as I'm somewhat in your camp in thinking that it should. I'd be interested to hear their reasons for or against that change.

bbende commented 3 years ago

For anyone trying to work around this, you can create your own SecureProtocolSocketFactory:

import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketAddress;
import java.net.UnknownHostException;

import javax.net.ssl.SSLSocketFactory;

import org.apache.commons.httpclient.params.HttpConnectionParams;
import org.apache.commons.httpclient.protocol.SecureProtocolSocketFactory;
// assuming SLF4J for the logger below
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CustomTLSProtocolSocketFactory implements SecureProtocolSocketFactory {

    private static final Logger log = LoggerFactory.getLogger(CustomTLSProtocolSocketFactory.class);

    private final SSLSocketFactory sslSocketFactory;

    public CustomTLSProtocolSocketFactory(final SSLSocketFactory sslSocketFactory) {
        this.sslSocketFactory = sslSocketFactory;
    }

    @Override
    public Socket createSocket(Socket socket, String host, int port, boolean autoClose) throws IOException, UnknownHostException {
       return sslSocketFactory.createSocket(socket, host, port, autoClose);
    }

    @Override
    public Socket createSocket(String host, int port, InetAddress localAddress, int localPort) throws IOException, UnknownHostException {
        return sslSocketFactory.createSocket(host, port, localAddress, localPort);
    }

    @Override
    public Socket createSocket(String host, int port, InetAddress localAddress, int localPort, HttpConnectionParams params) throws IOException {
        if (params == null) {
            throw new IllegalArgumentException("Parameters may not be null");
        }
        int timeout = params.getConnectionTimeout();
        if (timeout == 0) {
            return sslSocketFactory.createSocket(host, port, localAddress, localPort);
        } else {
            Socket socket = sslSocketFactory.createSocket();
            SocketAddress localaddr = new InetSocketAddress(localAddress, localPort);
            SocketAddress remoteaddr = new InetSocketAddress(host, port);
            socket.bind(localaddr);
            socket.connect(remoteaddr, timeout);
            return socket;
        }
    }

    @Override
    public Socket createSocket(String host, int port) throws IOException, UnknownHostException {
        return sslSocketFactory.createSocket(host, port);
    }
}

You just need to create your own SSLContext and then call sslContext.getSocketFactory().
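
A minimal sketch of that step, assuming a plain JKS trust store (the path and password below are hypothetical) and using only standard JDK classes (java.security.KeyStore, javax.net.ssl.TrustManagerFactory, javax.net.ssl.SSLContext):

// Load the trust store that contains the IdP's certificate or its CA
// (hypothetical location and password, adjust to your environment).
KeyStore trustStore = KeyStore.getInstance("JKS");
try (InputStream in = new FileInputStream("/path/to/truststore.jks")) {
    trustStore.load(in, "changeit".toCharArray());
}

// Build trust managers from that store and an SSLContext on top of them.
TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
tmf.init(trustStore);

SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(null, tmf.getTrustManagers(), null);

// This is the factory handed to the CustomTLSProtocolSocketFactory above.
SSLSocketFactory sslSocketFactory = sslContext.getSocketFactory();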

Once you have created an instance of this custom class, you can then register it just like the configurer was doing:

ProtocolSocketFactory socketFactory = new CustomTLSProtocolSocketFactory(sslSocketFactory);
Protocol p = new Protocol("https", socketFactory, port);
Protocol.registerProtocol(p.getScheme(), p);

derkoe commented 3 years ago

We have the same problem and it only occurs with Java 11 (tried AdoptOpenJDK and Zulu) - Java 8 works without issues.

derkoe commented 3 years ago

The reason is the OpenSAML version: 2.6.4 works, 2.6.6 does not (both versions are EOL).

manjosh1990 commented 2 years ago

I solved this by removing some of my unused Java beans that I'd copy pasted from some guide.

It worked for me as well.