Closed egvimo closed 1 year ago
I've worked around this by adding the `reactor.netty.pool.maxIdleTime` option to our deployment. If there is nothing else I can do to prevent this exception, the issue can be closed.
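For reference, a minimal sketch of how such an option might be passed to the container. The use of a `JAVA_OPTS` environment variable and the 30-second value are assumptions for illustration, not details taken from this issue:

```shell
# Hypothetical deployment config: pass the Reactor Netty pool option as a
# JVM system property (value in milliseconds). Pick a value below the
# server-side idle timeout so the client drops idle connections first.
export JAVA_OPTS="-Dreactor.netty.pool.maxIdleTime=30000"
```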
That seems correct. You need to align the idle timeout with the one from Keycloak. See https://github.com/reactor/reactor-netty/issues/1764 for details. TL;DR: Keycloak (e.g. Tomcat) keeps an open connection around for further requests. The requesting client tries to reuse the connection because it still appears open, while the server has already closed it. It might be an issue with the Keycloak configuration or some Kubernetes component (e.g. a Service) that closes the connection.
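A sketch of the programmatic alternative to the system property: setting the max idle time on the reactor-netty `ConnectionProvider` backing the `WebClient`, so pooled connections are evicted before the server side closes them. The method name and the 20-second value are assumptions; choose a value below the Keycloak/Service idle timeout:

```java
import java.time.Duration;

import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;

import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

public class KeycloakWebClientConfig {

    // Hypothetical factory method: evict connections idle longer than 20s,
    // i.e. shorter than the server-side keep-alive, so the client never
    // reuses a connection the server has already closed.
    public static WebClient keycloakWebClient() {
        ConnectionProvider provider = ConnectionProvider.builder("keycloak")
                .maxIdleTime(Duration.ofSeconds(20))
                .build();
        HttpClient httpClient = HttpClient.create(provider);
        return WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(httpClient))
                .build();
    }
}
```

This configuration fragment requires reactor-netty and spring-webflux on the classpath; it is a sketch of the alignment idea, not a drop-in fix for JHipster's generated client.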
I'll close this as it doesn't look like a JHipster issue. If you think it is, feel free to reopen it.
Overview of the issue
We run our application on a Kubernetes cluster, which uses a managed Keycloak for authentication. If the application tries to fetch a refresh token, we get a Whitelabel Error Page with the following 500 server error:
Stacktrace
This only happens on the cluster. I don't know whether this has something to do with the Keycloak configuration, because we cannot reproduce it locally.
Reproduce the error
Unfortunately I can't reproduce the error locally; it happens only on the Kubernetes cluster.
Related issues
Maybe #17388?
JHipster Version(s)
7.9.3 (and also with 7.8.1)
JHipster configuration
JDL definitions