e30532 opened this issue 5 years ago
hi @e30532, thanks for investigating this issue and opening a PR with a proposed fix!
Even though this performance regression was introduced by APAR PI61592, we can't undo parts of it for the sake of performance, since doing so could break existing users who depend on this APAR. Perhaps we can add some further detection to disable this subject check in certain cases (e.g. well-known RAs), or make it a configurable setting in server.xml. I'm going to check with @teddyjtorres, who was the developer who originally made this APAR.
In the meantime, could you share the connection pool configuration being used for your scenario? Also, approximately how many users are there? If numUsers > maxPoolSize, then there could be churn in the connection pool because there isn't enough space for each user to have their own connection in the pool. However, if numUsers is very large (over 500), then it would not be feasible to simply increase maxPoolSize and we would have to explore alternative options. Setting maxPoolSize >= numUsers may be a quick fix, though.
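To make the churn concrete, here is a rough, self-contained sketch (not Liberty code; the pool model and names are made up for illustration). It models a free pool in which a connection can only be reused by the same user whose subject created it, so with numUsers > maxPoolSize essentially every request misses the pool and creates a new connection:

import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical stand-alone model, NOT Liberty code: a free pool where a pooled
// connection can only be reused by the user (subject) it was created for.
public class PoolChurnSketch {
    public static void main(String[] args) {
        int maxPoolSize = 2;
        int numUsers = 5;                            // numUsers > maxPoolSize -> churn
        Deque<String> freePool = new ArrayDeque<>(); // each entry = the user a free connection belongs to
        int creates = 0;

        // 100 requests arriving round-robin from numUsers distinct login users.
        for (int i = 0; i < 100; i++) {
            String user = "user" + (i % numUsers);
            if (!freePool.remove(user)) {               // no matching free connection
                creates++;                              // -> create a new one
                if (freePool.size() >= maxPoolSize) {   // pool is full
                    freePool.removeLast();              // -> destroy the oldest free connection
                }
            }
            freePool.addFirst(user);                    // return the connection to the free pool
        }
        System.out.println("Connections created for 100 requests: " + creates);
        // Prints 100 here (every request misses); with maxPoolSize >= numUsers it
        // would print numUsers, because each user's connection gets reused.
    }
}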
This code cannot be removed since the principal is needed by resource adapters that can make use of it. Removing this code will cause a regression and potential outages.
@teddyjtorres If not all resource adapters need the principal, would a switch be appropriate to help performance?
Hi @aguibert and @teddyjtorres, thank you so much for your comments. I totally agree that we should not break systems that depend on the fix delivered by PI61592.
I also understand that for systems where maxPoolSize is greater than numUsers, this performance concern does not apply. Currently, I don't have any customers actually reporting the performance issue; I just wanted to suggest a possibility to make Liberty faster. The actual performance benefit of fixing this would depend on the system.
In my environment, I'm just running a fairly simple scenario with the configuration below.
In the case of MQ, the container-auth credential is enough to connect to MQ and the caller principal is not needed, but as you can see in the trace below, the free connection can't be reused because of the subject mismatch in the JCA layer.
[6/21/19 5:28:12:686 PDT] 00000085 ConnectionMan > allocateMCWrapper Entry
[6/21/19 5:28:12:686 PDT] 00000085 FreePool > getFreeConnection Entry
[6/21/19 5:28:12:686 PDT] 00000085 FreePool > getMCWrapperFromMatch Entry
[6/21/19 5:28:12:686 PDT] 00000085 JMSApi > com.ibm.mq.connector.outbound.ManagedQueueConnectionFactoryImpl(ManagedConnectionFactoryImpl)@14f2a339 matchManagedConnections(final Set,final Subject,final ConnectionRequestInfo) ENTRY [com.ibm.mq.connector.outbound.ManagedQueueConnectionImpl@3ca9866b] Subject:
Principal: WSPrincipal:test2
Private Credential: javax.resource.spi.security.PasswordCredential@a31517d5
username: null password: null contextUsage: false isCredentialsSet: false
[6/21/19 5:28:12:686 PDT] 00000085 WSPrincipal > equals Entry
WSPrincipal:test1
[6/21/19 5:28:12:686 PDT] 00000085 WSPrincipal < equals Exit
false
[6/21/19 5:28:12:686 PDT] 00000085 JMSApi 1 ManagedConnectionFactoryImpl matchManagedConnections(...)
returning: null
[6/21/19 5:28:12:686 PDT] 00000085 FreePool < getMCWrapperFromMatch Exit
null
[6/21/19 5:28:12:686 PDT] 00000085 FreePool 3 MCWrapper was not found in Free Pool
[6/21/19 5:28:12:686 PDT] 00000085 FreePool > createOrWaitForConnection Entry
[6/21/19 5:28:12:686 PDT] 00000085 FreePool 3 TotalConnectionCount is 2
Would you please explore the possibility of a resolution that doesn't break anything?
@una-tapa suggested that I open an issue. cc: @kgibm
There is a performance concern with com.ibm.ws.security.jca.internal.AuthDataServiceImpl, especially in a Liberty + MQ system where container authentication is used. This might also apply to other types of EIS, but let me explain the concern using the MQ case.
When a secured servlet (ServletA) requests a connection, a new connection is created. When the connection manager allocates a connection, it associates a subject with the connection, and the subject contains the following information. The credential is created from the container-auth information.
Subject:
  Principal: login user of ServletA
  Credential: container-auth info
Then, if another login user requests a connection, the connection matching logic in the JCA layer takes the entire subject into consideration, so the existing connection can't be reused for the other login user even if it is in the free pool. Frequent connection creation and destruction will happen in the pool if many login users request connections, which is not good from a performance perspective.
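To illustrate what that matching step does, here is a minimal sketch of how a resource adapter's matchManagedConnections typically compares subjects (illustrative only, not the actual MQ RA source; MyManagedConnection and subjectsMatch are hypothetical names):

import java.util.Iterator;
import java.util.Set;
import javax.resource.ResourceException;
import javax.resource.spi.ConnectionRequestInfo;
import javax.resource.spi.ManagedConnection;
import javax.security.auth.Subject;

// Illustrative sketch only: how a resource adapter may reject a pooled connection
// whose Subject differs from the Subject of the incoming request.
public class SubjectMatchSketch {

    public ManagedConnection matchManagedConnections(Set connectionSet, Subject subject,
                                                     ConnectionRequestInfo cxRequestInfo) throws ResourceException {
        for (Iterator<?> it = connectionSet.iterator(); it.hasNext();) {
            Object candidate = it.next();
            if (candidate instanceof MyManagedConnection) {
                MyManagedConnection mc = (MyManagedConnection) candidate;
                // The whole Subject is compared, so a different caller principal
                // (e.g. WSPrincipal:test1 vs WSPrincipal:test2) means no match, even
                // though the container-auth PasswordCredential is the same.
                if (subjectsMatch(mc.getSubject(), subject)) {
                    return mc;
                }
            }
        }
        return null; // no reusable connection -> the pool creates (and later destroys) a new one
    }

    // Hypothetical helper: compares the principals and private credentials of two Subjects.
    private boolean subjectsMatch(Subject pooled, Subject requested) {
        if (pooled == requested) return true;
        if (pooled == null || requested == null) return false;
        return pooled.getPrincipals().equals(requested.getPrincipals())
            && pooled.getPrivateCredentials().equals(requested.getPrivateCredentials());
    }

    // Hypothetical managed connection type that remembers the Subject it was created with.
    interface MyManagedConnection extends ManagedConnection {
        Subject getSubject();
    }
}

This corresponds to the trace above, where WSPrincipal.equals returns false and matchManagedConnections returns null.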
The credential generated from the container-auth data is sufficient for the MQ RAR to connect to MQ, so I was wondering why Liberty adds the caller principal to the subject. When AuthDataServiceImpl creates a subject from AuthData, it adds the caller principal in the obtainSubject method. It seems this behavior was introduced by an old APAR, PI61592, which was opened for a customer's specific resource adapter. But I feel that Liberty is losing a performance benefit for general use cases by accommodating a specific use case.
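For reference, my understanding of the subject construction is roughly as follows (a simplified sketch under my assumptions, not the actual AuthDataServiceImpl source; the AuthData class here is a made-up stand-in for the configured container auth data): the container-managed PasswordCredential alone would be enough for the MQ RA, but the caller principal is added as well, which makes the subject differ for every login user.

import java.security.Principal;
import javax.resource.spi.ManagedConnectionFactory;
import javax.resource.spi.security.PasswordCredential;
import javax.security.auth.Subject;

// Simplified sketch of the behavior described above; NOT the actual
// com.ibm.ws.security.jca.internal.AuthDataServiceImpl source.
public class ObtainSubjectSketch {

    // Hypothetical stand-in for the configured container auth data (authData alias).
    static class AuthData {
        final String user;
        final char[] password;
        AuthData(String user, char[] password) { this.user = user; this.password = password; }
    }

    public Subject obtainSubject(ManagedConnectionFactory mcf, AuthData authData, Principal caller) {
        Subject subject = new Subject();

        // Container-auth credential: this alone is sufficient for the MQ RA to connect.
        PasswordCredential cred = new PasswordCredential(authData.user, authData.password);
        cred.setManagedConnectionFactory(mcf);
        subject.getPrivateCredentials().add(cred);

        // Caller principal added since APAR PI61592: it makes the Subject unique per
        // login user, so matchManagedConnections cannot reuse another user's free connection.
        if (caller != null) {
            subject.getPrincipals().add(caller);
        }
        return subject;
    }
}

If the caller principal were only added for the resource adapters that actually need it, or behind a switch as discussed above, the subjects would be identical for all users of a given container-auth alias and free connections could be reused.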