This is the nature of access tokens. That's why you should make sure they're short-lived.
Thanks Matt, I agree. That's why I proposed checking the token locally, so it works the same way on each call: if I've got a token, I can access the services, no matter the state of the session. That way users don't phone the helpdesk saying "I can't access menu A, but menu B is OK, so I'm sure I'm still logged in".
I believe Spring Security handles this automatically when `jwk.key-set-uri` is set. We recently changed to use this for microservices.
Can you try setting it and see if it solves your problem?
I tried, same problem: tokens are still validated on the server. I put this config under `security.oauth2.resource`:

```yaml
filter-order: 3
user-info-uri: http://localhost:9080/auth/realms/{realm}/protocol/openid-connect/userinfo
token-info-uri: http://localhost:9080/auth/realms/{realm}/protocol/openid-connect/token/introspect
prefer-token-info: false
jwk:
    key-set-uri: http://localhost:9080/auth/realms/jhipster/protocol/openid-connect/certs
```

FYI, my app is multitenant, so I use `{realm}` to inject the realm associated with the current tenant; I've defined a custom bean to inject it:

```java
@Bean(name = "multiResourceServerProperties")
@Primary
public ResourceServerProperties multiResourceServerProperties() {
    return new MultiResourceServerProperties();
}
```
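For illustration, the custom properties class looks roughly like this (simplified sketch, not my exact code; the `CURRENT_REALM` holder stands in for my real tenant lookup):

```java
import org.springframework.boot.autoconfigure.security.oauth2.resource.ResourceServerProperties;

// Sketch: resolve the {realm} placeholder against the current tenant
// whenever the OIDC endpoint URIs are read.
public class MultiResourceServerProperties extends ResourceServerProperties {

    // Hypothetical holder for the current tenant's realm (the real lookup differs).
    public static final ThreadLocal<String> CURRENT_REALM = new ThreadLocal<>();

    @Override
    public String getUserInfoUri() {
        return resolveRealm(super.getUserInfoUri());
    }

    @Override
    public String getTokenInfoUri() {
        return resolveRealm(super.getTokenInfoUri());
    }

    private String resolveRealm(String uri) {
        return uri == null ? null : uri.replace("{realm}", CURRENT_REALM.get());
    }
}
```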
Are you saying you want to do validation on the client? Currently, the client knows nothing about OIDC. It's all handled by Spring Security on the server. This implementation uses cookies and is generally viewed as more secure than a client implementation that uses local storage.
Yes totally agree with @mraible
If you have the tokens on the client and anybody grabs them (the refresh token, for instance), then your account is compromised for a long time. If you store the tokens in the session, the security risk is much lower.
Of course, that means the application is stateful, which can cause issues when scaling down (clients will be logged out when they lose their sessions). That's why we should replicate them with Hazelcast, but that's not done yet. ( @mraible we should establish a task force on this, wdyt?)
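A rough sketch of what that could look like with Spring Session's Hazelcast support (assuming the `spring-session-hazelcast` dependency is on the classpath; this is not what we generate today, just to illustrate the idea):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.session.hazelcast.config.annotation.web.http.EnableHazelcastHttpSession;

// Sketch: store HTTP sessions in a Hazelcast cluster so losing a node
// doesn't log users out.
@Configuration
@EnableHazelcastHttpSession
public class SessionConfiguration {

    @Bean
    public HazelcastInstance hazelcastInstance() {
        // Default standalone config; a real deployment would configure cluster members.
        return Hazelcast.newHazelcastInstance();
    }
}
```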
Sorry, I wasn't clear — end of the work day in Belgium. I would like the validation to happen on the called REST resource (a microservice), not on the client. Currently the validation is done by the Keycloak server: the resource (the microservice instance) receives the token and calls Keycloak to validate it. I proposed validating the token on the microservice itself (so still on the server ;), using Keycloak's public key.
@warnonphilippe I just assumed Spring Security was already doing this. I'm happy to improve things. Note that https://github.com/jhipster/generator-jhipster/pull/7120 removes some manual logic to get a public key and set a verifier key. I'm guessing you still have this in your codebase?
@mraible I regenerated a new project (gateway + 2 microservices) with JHipster 4.14.1. Same behavior as my initial project: the tokens are checked on the OIDC server the first time and then cached. The code removed by #7120 is still in the codebase, so I tried setting `jwk.key-set-uri` again, but got the same behavior: the token is still checked on the OIDC server and then cached.
@warnonphilippe Thanks for taking the time to look into this and confirm the latest configuration doesn't fix things. Unfortunately, I won't have time to dig in and try to fix this until after my upcoming vacation. I'll be offline from March 23 - April 3.
@mraible Thank you for your help, but most importantly, enjoy your vacation!
@warnonphilippe I know it might be difficult to do, but can you check the code references in https://github.com/jhipster/generator-jhipster/issues/7281 to see if it fixes this issue?
@mraible I still see this class in the code posted on GitHub:

```java
public class CachedUserInfoTokenServices extends UserInfoTokenServices {

    private final Logger log = LoggerFactory.getLogger(UserInfoTokenServices.class);

    public CachedUserInfoTokenServices(String userInfoEndpointUrl, String clientId) {
        super(userInfoEndpointUrl, clientId);
    }

    // Cached by access token: subsequent calls with the same token skip the OIDC server.
    @Override
    @Cacheable("oAuth2Authentication")
    public OAuth2Authentication loadAuthentication(String accessToken)
        throws AuthenticationException, InvalidTokenException {
        log.debug("Getting user information from OpenID Connect server");
        return super.loadAuthentication(accessToken);
    }
}
```

Am I on the wrong version?
My problem is caused by the use of `@Cacheable`: the service should call Keycloak every time (or never) to check the token, but it only calls the first time. So if the session is killed after that first call, the token is still considered valid in the cache when the second call occurs.
Disabling `jwt-key` didn't fix the problem. I had already commented it out in my code, because I need to inject the tenant information into the URL and there is no tenant available at the moment the `jwt-key` URL is used.
Please send a pull request to make the change you'd like to see. I'd be happy to review and test.
@warnonphilippe (Preliminary note: contrary to the "uaa" authenticationType, in JHipster's "oauth2" authenticationType the access_token is NOT treated as a JWT by the gateway or the microservices; they have to call the identity provider to check the identity of the user or the validity of the token.)
I agree that the mixed behaviour (some microservice calls are OK and others are not) is not a good thing, but defining a globally good solution is difficult.
It all depends on your needs, but generally, disabling the cache on the microservice would make a lot more calls to the identity provider (that being said, the current cache strategy should probably be tuned; see the sketch below).
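For example, a tuned cache could bound how long a validation result is reused, something like this (assuming a Caffeine-backed cache manager; the 60-second TTL and maximum size are arbitrary, just for illustration — the cache name matches `CachedUserInfoTokenServices` above):

```java
import java.util.concurrent.TimeUnit;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch: a killed session would then be detected after at most one TTL.
@Configuration
@EnableCaching
public class TokenCacheConfiguration {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager("oAuth2Authentication");
        cacheManager.setCaffeine(Caffeine.newBuilder()
            .expireAfterWrite(60, TimeUnit.SECONDS) // re-check with the IdP at most every minute
            .maximumSize(1000));
        return cacheManager;
    }
}
```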
Also, killing the session on Keycloak doesn't necessarily mean you want to log the user out (and if it does, some Single Log Out strategy must be set up, typically to also kill the session on the gateway here. I don't know if Keycloak offers something to do this, but that would be the cleaner way).
(If you consider proposing an evolution, check the cleanup being made in #7666. Please note: we are removing CachedUserInfoTokenServices from the gateway, but this will not make the gateway validate the token on each request: on the gateway, Spring Security calls UserInfoTokenServices only at connection time, and after that it depends on the identity stored in the session.)
Hi @farrault, thanks for your response. I agree that disabling the cache isn't a good thing, except on a small system. (*sol1)
So the only choice to avoid the mixed behaviour would be to check the token in the service itself. But as you described in your note, the access_token is not treated as a JWT, so it can't be checked locally. I think OAuth2 doesn't specify the token's structure, so it could be a JWT checked locally, but that would remove the server's ability to revoke the token. (*sol2)
Theoretically, the mixed behaviour is probably not a good thing, but in practice I think it's better than a solution with more calls or a solution without revocation. So I think the best solution is to keep the current behaviour, hoping someone will have a better idea, maybe @mraible?
Just to be clear: HERE the access_token IS technically a JWT, but the gateway and the microservices are not coupled to that specificity: it is opaque to them, and so they have to call the IdP. The problem here is that the /userinfo endpoint doesn't return the token's expiry time (I think): that would allow caching it only for the correct lifetime (see the sketch below).
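Since the token technically is a JWT here, something like this could read the remaining lifetime (a sketch assuming `spring-security-jwt` and Jackson are available; the `exp` claim is read WITHOUT signature verification, purely to bound the cache entry, never to authenticate the caller):

```java
import java.io.IOException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.security.jwt.JwtHelper;

public final class TokenLifetimes {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Returns how many seconds the token has left, based on its (untrusted) exp claim.
    public static long secondsUntilExpiry(String accessToken) throws IOException {
        String claims = JwtHelper.decode(accessToken).getClaims(); // raw JSON payload
        JsonNode exp = MAPPER.readTree(claims).get("exp");
        long now = System.currentTimeMillis() / 1000;
        return exp == null ? 0 : Math.max(0, exp.asLong() - now);
    }

    private TokenLifetimes() {
    }
}
```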
That being said, with the token refresh implemented in #7666, your problem may be less visible, because the gateway will no longer allow a call to the microservices with an expired token.
@farrault Thanks for the clarification, though you were already clear; it was my answer that wasn't. Just to be clear too: the problem occurs when accessing a service with a token linked to a killed session, so it can occur just after the session is killed and before the token expires. Thanks to #7666, this window will be very short. I think the issue can be closed.
Overview of the issue
When I kill a user's session in Keycloak, the user can still access the services used previously with the same token. However, other services can't be accessed.
When accessing a service, the JHipster-generated code calls the OIDC provider to check whether the token is valid. It then stores the token's info in its cache and reuses it on the next call with the same token. That's a good idea to avoid traffic between the service and Keycloak, but if the session is killed after the first call, the token is still valid for the second call, while a call to another service with the same token will fail.
Motivation for or Use Case
I think the user would expect one of two behaviours after his session has been killed: either he can access all services until the token expires, OR he can't access anything. But here, he can access only some services.
I may be wrong, but I think that a JWT is stateless and OAuth2 allows checking the token locally, so JHipster could decode the token locally with the public key of the OIDC server. That would be more coherent for the user.
Reproduce the error
Suggest a Fix
Check the token locally with the public key of the OIDC server (a sketch follows).
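For illustration, a rough sketch of that fix with spring-security-oauth2 and spring-security-jwt: verify tokens locally against the realm's public key instead of calling the introspection endpoint. `PUBLIC_KEY` is a placeholder for the key from Keycloak's realm settings page, and the trade-off discussed above applies: a revoked token would stay valid locally until it expires.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.provider.token.TokenStore;
import org.springframework.security.oauth2.provider.token.store.JwtAccessTokenConverter;
import org.springframework.security.oauth2.provider.token.store.JwtTokenStore;

@Configuration
public class LocalJwtValidationConfiguration {

    // Placeholder: copy the realm's RSA public key from the Keycloak admin console.
    private static final String PUBLIC_KEY =
        "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----";

    @Bean
    public JwtAccessTokenConverter accessTokenConverter() {
        JwtAccessTokenConverter converter = new JwtAccessTokenConverter();
        converter.setVerifierKey(PUBLIC_KEY); // signatures are checked offline
        return converter;
    }

    @Bean
    public TokenStore tokenStore() {
        // Tokens are decoded and verified locally: no network call per request.
        return new JwtTokenStore(accessTokenConverter());
    }
}
```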
JHipster Version(s)
generator-jhipster@4.13.3
JHipster configuration, a `.yo-rc.json` file generated in the root folder
JDL for the Entity configuration(s), `entityName.json` files generated in the `.jhipster` directory
Environment and Tools
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) Client VM (build 25.161-b12, mixed mode)
git version 2.15.0.windows.1
node: v9.1.0
npm: 5.5.1
yeoman: 2.0.0
yarn: 1.3.2
Docker version 17.12.0-ce, build c97c6d6
docker-compose version 1.18.0, build 8dd22a96
Browsers and Operating System
Windows 10, OS X Chrome, Firefox