This is going to be a fun one due to a lack of obvious cause but here we go:
Upfront details:
EKS 1.30
Shinyproxy 3.1.0
Istio 1.22
After launching ShinyProxy, completing the Keycloak login cycle, and starting the first application, I'm immediately met with the following common error:
Monitoring the Kubernetes cluster, I can see the pod being scheduled as expected and taking 20-45 seconds to become ready and enter a healthy state - all the while ShinyProxy is monitoring the pod readiness state before routing traffic.
The question is: why isn't ShinyProxy showing its usual pending/loading screen? Instead I have to wait until the pod starts and hit retry, at which point the connection to the pod via the iframe is picked up almost immediately and normal service resumes.
Attached are some debug logs showing that the pod startup is being monitored correctly; the client just isn't being held in the usual waiting-screen loop.
Based on the relevant partial application.yml (attached), I would expect the pod to be given up to 75 seconds to start, and then the container up to an additional 75 seconds to become healthy, before seeing this error.
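For reference, the 75-second expectation would come from a fragment along these lines - a sketch only, since the full application.yml isn't reproduced here, and I'm assuming the expectation maps onto ShinyProxy's `proxy.container-wait-time` setting (milliseconds):

```yaml
proxy:
  # How long ShinyProxy waits for the container/pod to start before
  # reporting a startup failure (ms). Sketch value matching the
  # 75-second expectation described above.
  container-wait-time: 75000
```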
This debug log has the following steps taken from the user (me):
Log into shinyproxy
Launch an application
Instantly get an error saying the pod failed to start
Pod continues to start and get healthy in the background
Pressing retry once the pod is healthy and the application loads as expected
The key takeaway: as far as I can see, the usual machinery and processes are working as normal on the Kubernetes side; it's ShinyProxy itself that immediately returns a "failed to start app" error instead of waiting out any of the startup timeouts.
Fixed it: an Istio AuthorizationPolicy (RBAC) was set to block anything on /api/ paths without a JWT. ShinyProxy uses a Spring Security session cookie rather than a bearer token, so its requests fell foul of this rule.
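For anyone hitting the same symptom, the offending policy looked roughly like the sketch below (the name and namespace are illustrative, not my exact manifest): an ALLOW-action AuthorizationPolicy that only admits /api/* requests carrying a validated JWT principal, which implicitly rejects ShinyProxy's cookie-authenticated calls on those paths.

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: require-jwt-on-api   # illustrative name
  namespace: shinyproxy      # illustrative namespace
spec:
  action: ALLOW
  rules:
    - to:
        - operation:
            paths: ["/api/*"]
      from:
        - source:
            # Only requests with a validated JWT principal match this rule.
            # ShinyProxy's Spring Security session cookie carries no JWT,
            # so its /api/ requests matched no ALLOW rule and were denied.
            requestPrincipals: ["*"]
```

With an ALLOW-action policy in place, any request to a covered path that matches no rule is denied, which is why ShinyProxy's status-polling calls failed instantly rather than timing out.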