cernier opened this issue 2 years ago
My quick guess is that the ingress/load balancer in your K8s cluster needs to be configured for session affinity, aka sticky sessions. This is what start.vaadin.com generates for Spring Boot apps; something similar might work in your setup (I have never used Okteto).
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-balancer
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 8000       # The port to expose to the outside
      targetPort: 8080 # The port the application is running on in the pods
  type: LoadBalancer
  sessionAffinity: ClientIP
```
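A note on `sessionAffinity: ClientIP`: if traffic reaches the pods through an ingress or front proxy, the pods may all see the proxy's IP, which makes client-IP affinity ineffective. Cookie-based affinity at the ingress layer is an alternative. Here is a sketch for the ingress-nginx controller (an assumption — I don't know which controller Okteto uses; the host and resource names are placeholders):

```yaml
# Sketch: cookie-based sticky sessions with the ingress-nginx controller.
# Hypothetical names; adapt host, service name and port to your deployment.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-balancer
                port:
                  number: 8000
```

With this, nginx issues its own affinity cookie (`INGRESSCOOKIE`) and routes each browser back to the same pod, regardless of which client IP the cluster sees.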
Unfortunately, the change you suggested does not seem to fix the issue.
Indeed, I initially didn't think of the Docker/Kubernetes files provided by the Spring Boot-based skeletons created on start.vaadin.com. As they are generated from the vaadin/skeleton-starter-flow-spring template, I forked it and adapted the Docker/Kubernetes files for a deployment to Okteto (like I did here for the Vaadin-Quarkus one).
So, in the end, the root cause of the issue seems more general and not related only to Quarkus or Spring Boot support. By the way, do you think I should open an issue on the main Vaadin Flow repository instead?
Below are URLs serving UIs that show this wrong behaviour:

- vaadin-quarkus sample: https://vaadin-quarkus-cernier.cloud.okteto.net, loaded from the 2-dockerized-pods branch of my fork of vaadin/base-starter-flow-quarkus (you can deploy it to your own Okteto namespace from there)
- vaadin-spring sample: https://vaadin-spring-cernier.cloud.okteto.net, loaded from the 2-dockerized-pods branch of my fork of vaadin/skeleton-starter-flow-spring (you can deploy it to your own Okteto namespace from there)

Okteto is just a platform that makes it easy to deploy to a Kubernetes cluster in the cloud, especially for basic testing purposes with a free tier. If you would prefer a reproducer on another platform, feel free to tell me and I will adapt it as well.
No need to create a separate issue; the team can move this one to the Flow repository if they find something to work on from here.
I still believe this is an issue in the cluster configuration. I quickly used the browser inspector and curl against your deployments, and it looks like the session cookie changes on every request, even when you send back the session cookie the server set in the previous response. So either each request goes to a different node (round robin instead of session affinity), or the cookie is dropped by the front proxy.
Here is the output with curl. With a properly working setup, the second request shouldn't set the cookie anymore:
```
mstahv@MatinPikkuRakkine ~ % curl --head https://vaadin-quarkus-cernier.cloud.okteto.net
HTTP/2 200
date: Fri, 18 Feb 2022 08:49:31 GMT
content-type: text/html;charset=utf-8
content-length: 821
set-cookie: JSESSIONID=ngNZQD8kQeravD89N05muhOn0YweOEbROj5dvDPB; path=/
set-cookie: csrfToken=ce5a9570-742e-4679-b285-aa31d292197e; path=/
strict-transport-security: max-age=15724800; includeSubDomains

mstahv@MatinPikkuRakkine ~ % curl --cookie "JSESSIONID=ngNZQD8kQeravD89N05muhOn0YweOEbROj5dvDPB; path=/" --head https://vaadin-quarkus-cernier.cloud.okteto.net
HTTP/2 200
date: Fri, 18 Feb 2022 08:50:02 GMT
content-type: text/html;charset=utf-8
content-length: 821
set-cookie: JSESSIONID=RRUsHVyMzKk2g-sVmbtGpdJRWaizTxYR4nAPUJlA; path=/
set-cookie: csrfToken=36aaa8fd-ad07-458a-ae38-d436373da211; path=/
strict-transport-security: max-age=15724800; includeSubDomains
```
Yeah, it is round robin. If you make a third request with the initially returned cookie, then it works, but naturally the browser then thinks that the session has expired and reloads.
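For anyone repeating this check, curl's cookie jar (`-c`/`-b`) automates sending back whatever the previous response set. Below is a self-contained sketch of the comparison step only, with the two responses hard-coded to the JSESSIONID values from the transcript above; against a live deployment you would capture them with the commented-out curl calls instead:

```shell
# Against a real deployment, capture headers via curl's cookie jar:
#   curl -sI -c jar.txt -b jar.txt https://myapp.example.com > resp1.txt
#   curl -sI -c jar.txt -b jar.txt https://myapp.example.com > resp2.txt
# Here the two responses are hard-coded to mirror the transcript above.
cat > resp1.txt <<'EOF'
set-cookie: JSESSIONID=ngNZQD8kQeravD89N05muhOn0YweOEbROj5dvDPB; path=/
EOF
cat > resp2.txt <<'EOF'
set-cookie: JSESSIONID=RRUsHVyMzKk2g-sVmbtGpdJRWaizTxYR4nAPUJlA; path=/
EOF

# Extract the JSESSIONID value from a saved header dump.
sid() { grep -o 'JSESSIONID=[^;]*' "$1"; }

if [ "$(sid resp1.txt)" = "$(sid resp2.txt)" ]; then
  echo "sticky: same session on both requests"
else
  echo "not sticky: a new session was issued on the second request"
fi
```

With the values above, the script prints the "not sticky" branch, which is exactly the round-robin symptom seen here: a sticky setup would leave the JSESSIONID unchanged between the two responses.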
My K8s skills are not at top level; I'll ask if one of our experts could find some time to build a working setup for Okteto.
Description of the bug
Although one of Quarkus's advantages is to deploy and run cloud-native applications packaged as containerized Docker images, it appears that the Vaadin integration for Quarkus and/or this starter sample is not ready for this purpose.
Indeed, whereas there is no issue when running it packaged as a Docker image deployed to a single-pod/replica Kubernetes cluster, when it's deployed to a multi-pod/replica one the UI (almost) endlessly reloads, so it's obviously not usable at all. "Almost" means that, sometimes, the reload loop stops and the last load completes fine.
Expected behavior
The UI should be usable and load fine, without such endless reloading, whatever the number of pods/replicas in the Kubernetes cluster, as is the case with just one:
Deploy directly and quickly on Okteto the 🟢 working Dockerized sample (1-dockerized-pod branch of my fork).

Minimal reproducible example

Deploy directly and quickly on Okteto the 🔴 failing Dockerized sample (2-dockerized-pods branch of my fork).

Versions