Closed KirioXX closed 3 years ago
It looks like I found a solution. Every time the ingress closes the connection to the pod, n8n dumps the session, which causes the workflows to fail. I have now increased the timeout to keep the connection open longer, which seems to solve the problem. 🤞 that no one builds workflows that run for longer than 5 minutes.
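For reference, a minimal sketch of the kind of BackendConfig this involves (the name `n8n-backend-config` and the 3600-second timeout are illustrative values, not necessarily what I used):

```yaml
# GKE-specific CRD: raises the load balancer's backend timeout so
# long-lived n8n connections are not cut off after the default 30s.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: n8n-backend-config  # illustrative name
spec:
  timeoutSec: 3600  # keep the connection open for up to an hour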
Can you open a PR so we have a decent initial timeout? I'll make an entry in the readme mentioning that.
I'm not sure the timeout fix is worth including in the helm chart, because I used a BackendConfig, which is a GKE-specific feature for their load balancer.
But I'll open a PR for the change I made to add annotations to the service, which makes it possible to attach the backend config. I would also be happy to help document the challenges I have had so far with GKE.
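For anyone hitting the same issue: with service annotations exposed in the chart, attaching a GKE BackendConfig would look roughly like this in the values file (the `service.annotations` key and the config name are assumptions based on my setup):

```yaml
service:
  annotations:
    # GKE applies the BackendConfig named here to this service's backends
    cloud.google.com/backend-config: '{"default": "n8n-backend-config"}'
```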
I still have some trouble with the workflows: somehow n8n can't initiate a trigger and just logs over and over that it is initiating the trigger.
First things first: thank you very much for this chart. It made my life quite a lot easier; I'm fairly new to Kubernetes, and it helped a lot to understand everything a bit better.
My problem is that n8n seems to lose its session quite often, with error messages on the client like:
and the server logs show:
I'm now not sure whether it's related to my Terraform/K8s setup or to n8n itself. This is my Terraform config:
and this is my values file:
Thank you in advance for any help. 🙂