dsyer opened this issue 10 years ago (status: Open)
Is it the JDBC side that's the problem? I'm sure you could fix it just by adding a cache to the ClientDetailsService. Did you try that?
If you're using BCrypt, it's supposed to be slow (if it's CPU-bound under load, then that's what it is).
We added something like this and the client details queries are gone. Is this what you meant, Dave?
<cache:advice id="cacheClientsAdvice" cache-manager="cacheManager">
  <cache:caching cache="OauthClientDetailsServiceCache">
    <cache:cacheable method="loadClientByClientId" key="#clientId"/>
  </cache:caching>
</cache:advice>

<cache:advice id="cacheEvictClient" cache-manager="cacheManager">
  <cache:caching cache="OauthClientDetailsServiceCache">
    <cache:cache-evict method="removeClientDetails" key="#clientId" before-invocation="false"/>
  </cache:caching>
</cache:advice>

<!-- apply the cacheable behavior to all ClientDetailsService interface methods -->
<aop:config>
  <aop:advisor advice-ref="cacheClientsAdvice" pointcut="execution(* org.springframework.security.oauth2.provider.ClientDetailsService.*(..))"/>
</aop:config>

<aop:config>
  <aop:advisor advice-ref="cacheEvictClient" pointcut="execution(* org.springframework.security.oauth2.provider.ClientRegistrationService.*(..))"/>
</aop:config>
Yes, that's what I mean. I would also recommend using a cache with a configurable TTL.
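For anyone who prefers code over XML config, the TTL idea can also be sketched as a plain decorator. This is a minimal, hypothetical sketch: `ClientLookup` is a simplified single-method stand-in for the real `ClientDetailsService` interface, and `TtlCachingClientLookup` is not a framework class. A real implementation would implement Spring's actual interface and likely use a library cache (Caffeine, Ehcache) instead of a hand-rolled map.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for ClientDetailsService: one lookup method.
interface ClientLookup<T> {
    T loadClientByClientId(String clientId);
}

// Caching decorator with a per-entry TTL, so stale client details
// (e.g. a revoked secret) age out even without an explicit evict.
class TtlCachingClientLookup<T> implements ClientLookup<T> {
    private final ClientLookup<T> delegate;
    private final long ttlMillis;
    private final Map<String, Entry<T>> cache = new ConcurrentHashMap<>();

    private static final class Entry<T> {
        final T value;
        final long expiresAt;
        Entry(T value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    TtlCachingClientLookup(ClientLookup<T> delegate, long ttlMillis) {
        this.delegate = delegate;
        this.ttlMillis = ttlMillis;
    }

    @Override
    public T loadClientByClientId(String clientId) {
        long now = System.currentTimeMillis();
        Entry<T> e = cache.get(clientId);
        if (e == null || e.expiresAt <= now) {
            // Miss or expired: go to the real service (the database) once.
            T fresh = delegate.loadClientByClientId(clientId);
            cache.put(clientId, new Entry<>(fresh, now + ttlMillis));
            return fresh;
        }
        return e.value;
    }

    // Call this from removeClientDetails/updateClientDetails to evict eagerly,
    // mirroring the cache-evict advice in the XML above.
    void evict(String clientId) {
        cache.remove(clientId);
    }
}
```

The TTL gives you a safety net for the tricky-invalidation problem mentioned later in this thread: even if an eviction is missed, a changed client record is picked up once the entry expires.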
Caching the client details would greatly improve overall performance. We have implemented our own OAuth server using Spring-security-oauth, and during our load tests we noticed that for a complete flow in our use case (user provides credentials, grants approval, client requests a token using an authorization code), almost 30% of the total time was spent retrieving the client details! It is a very simple query, but if your database is on a remote system, the latency adds up.
It seems obvious to cache the client details by default, since they generally won't change often.
Was this implemented? I'm trying to add caching as @shlomicthailand suggested, but that doesn't solve the issue. It's still the same: one round trip takes about 100 milliseconds, which is roughly 10x slower than the in-memory implementation. If there is some example, can someone provide it? Thanks.
@dsyer any suggestions? thanks
I noticed this too: loadClientByClientId is called 6 times from different classes for a single client details lookup (I implemented a MongoDB client-details service class, with in-memory user details at the moment). It would be really helpful if something were done at the framework level to reduce these calls.
This ticket deserves higher priority, as it can be fully fixed only at the framework level (invoke the loader once, then pass the instance to whoever needs it). The caching to be implemented here is too specific to delegate to the general cache mechanism. A thread-local cache would be a serious security bug (because of thread pooling in a servlet container), a generic cache is tricky to invalidate properly on any change in credentials or permissions, and a cache based on a request-scoped bean does not work (there's simply no request scope defined yet at the moment of the first invocation).
Why is it necessary to call the database 6 times to perform a client lookup??
I am using the JdbcClientDetailsService and I have noticed that for every request to retrieve an access token there are about 8 calls to the database (loadClientByClientId).
Wanted to get feedback about possible solutions for this.
One thing that came to mind was to use a ThreadLocal to store the ClientDetails, following a pattern similar to RequestContextHolder, with some static methods to access the ThreadLocal.
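A minimal sketch of that holder idea, loosely modeled on Spring's RequestContextHolder. The class and method names here are illustrative, not framework API. Note the objection raised elsewhere in this thread: because servlet containers pool threads, the holder must be cleared when the request completes (e.g. in a servlet Filter's finally block), or stale client details would leak into unrelated requests served by the same thread.

```java
// Hypothetical per-request holder for client details, in the style of
// RequestContextHolder. Not a framework class; names are illustrative.
final class ClientDetailsHolder {
    private static final ThreadLocal<Object> CURRENT = new ThreadLocal<>();

    private ClientDetailsHolder() {}

    // Set once, after the first loadClientByClientId call in a request.
    static void set(Object clientDetails) {
        CURRENT.set(clientDetails);
    }

    // Subsequent lookups in the same request read the cached instance.
    static Object get() {
        return CURRENT.get();
    }

    // MUST run when the request completes: servlet containers pool threads,
    // so a value left behind would be visible to the next request that
    // happens to run on the same thread.
    static void clear() {
        CURRENT.remove();
    }
}
```

This only de-duplicates lookups within a single request; it does nothing for cross-request caching, which is what the TTL-cache approach above addresses.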
Comments:
david_syer on Mon, 30 Dec 2013 10:49:15 +0000