Closed DanPhonovation closed 1 year ago
I am not sure whether n8n can be scaled, because of all the cron and trigger jobs. While we can do some tweaks on the k8s side to make it work, I fear we first need to check how to horizontally scale n8n.
Definitely possible, but I understand what you're saying: https://docs.n8n.io/reference/scaling-n8n.html#prerequisites
At the moment you can only have one main n8n instance running at a time; this may change in the future. I would also recommend moving away from MySQL and over to Postgres, as we will be fully dropping MySQL support in the future.
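For reference, the scaling setup linked above uses n8n's queue mode, where a single main instance hands executions to workers through Redis. A minimal sketch of the container environment for that setup (the variable names come from the n8n scaling docs; the hostnames are placeholders, not values from this thread):

```yaml
# Hypothetical sketch of queue-mode environment for an n8n pod.
# Hostnames are placeholders; adapt to your cluster's services.
env:
  - name: EXECUTIONS_MODE
    value: "queue"              # run executions via a Redis-backed queue
  - name: QUEUE_BULL_REDIS_HOST
    value: "redis.n8n.svc"      # placeholder Redis service name
  - name: DB_TYPE
    value: "postgresdb"         # Postgres rather than MySQL, per the note above
  - name: DB_POSTGRESDB_HOST
    value: "postgres.n8n.svc"   # placeholder Postgres service name
```

Workers would then be started with `n8n worker` against the same Redis and database, while only the main instance handles cron and webhook triggers.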
As for the original error, it is probably an old migration issue, so this issue can probably be closed.
Green-lit deployment of either 112 or 120 using this helm chart, with a fresh DB each time, results in:
1) when saving the first flow I get:
2) afterwards, I'm unable to save anything else and get this error:
The second issue is that if one attempts to scale above 1, manually or via HPA, or statically sets the number of replicas to 2 in the values.yaml,
any extra containers fail with ReadWriteOnce errors, i.e. the storage PVC is already attached to the first pod.
Changing this to ReadWriteMany would require volumeMode: Block when using something like CephRBD.
Is scaling not functioning here?
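For what it's worth, if the chart exposes the persistence access mode, the multi-replica PVC conflict could in principle be sidestepped with a storage class that supports ReadWriteMany in Filesystem mode (e.g. CephFS rather than CephRBD). A hypothetical values.yaml fragment along those lines (the key names depend on this particular chart and are assumptions on my part):

```yaml
# Hypothetical values.yaml fragment; actual key names depend on the chart.
persistence:
  enabled: true
  accessModes:
    - ReadWriteMany       # requires a storage class that supports RWX
  storageClass: cephfs    # placeholder: CephFS supports RWX without volumeMode: Block
  size: 10Gi
```

That only solves the volume attachment error, though; the single-main-instance limitation above still means extra replicas need to run as workers rather than as duplicate main instances.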