dunglas / mercure

🪽 An open, easy, fast, reliable and battery-efficient solution for real-time communications
https://mercure.rocks

helm - Unable to restart deployment when using shared PVC and boltdb transport (?) #953

Closed Fabccc closed 1 month ago

Fabccc commented 1 month ago

Issue

When restarting the deployment, the new pod fails to start with the following logs:

{"level":"info","ts":1726495525.6821456,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
{"level":"info","ts":1726495525.6832979,"msg":"adapted config to JSON","adapter":"caddyfile"}
{"level":"warn","ts":1726495525.6833138,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":6}
{"level":"info","ts":1726495525.6841474,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"warn","ts":1726495525.6842918,"logger":"http.auto_https","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv0","http_port":80}
{"level":"info","ts":1726495525.6843634,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0008c8d00"}
{"level":"info","ts":1726495526.6436424,"logger":"tls.cache.maintenance","msg":"stopped background certificate maintenance","cache":"0xc0008c8d00"}
Error: loading initial config:
loading new config:
loading http app module:
provision http:
server srv0:
setting up route handlers:
route 4: loading handler modules:
position 1: loading module 'mercure':
provision http.handlers.mercure: "bolt:///data/mercure.db?subscriptions=1":
invalid transport: timeout

Is it because boltdb can't access a locked database?

Workaround

Scale the deployment down to zero, then scale back up to 1 for the changes to apply.
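
A sketch of the workaround with kubectl, assuming the chart produced a Deployment named `mercure` with the usual `app.kubernetes.io/name=mercure` label (adjust the names to your release and namespace):

```bash
# Stop the running pod so it releases the bolt file lock on the shared PVC
kubectl scale deployment mercure --replicas=0

# Wait for the old pod to terminate before starting the replacement
kubectl wait --for=delete pod -l app.kubernetes.io/name=mercure --timeout=60s
kubectl scale deployment mercure --replicas=1
```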

Lasting solution

Allow choosing the kind of K8s resource (Deployment or StatefulSet) in the values.yaml.
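
Something like the following (a hypothetical sketch; the `kind` key does not exist in the chart today) would let operators opt into a StatefulSet, whose rolling updates delete the old pod before creating its replacement:

```yaml
# Hypothetical values.yaml option -- not implemented in the chart yet.
# Deployment: current behavior; with RollingUpdate the new pod starts
# before the old one exits, so it finds mercure.db still locked on the
# shared PVC.
# StatefulSet: pods are replaced one at a time, old pod first, so the
# bolt file lock is released before the new pod opens the database.
kind: StatefulSet
```

A Deployment with `strategy.type: Recreate` would avoid the overlap the same way, since Kubernetes then terminates all old pods before creating new ones.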

dunglas commented 1 month ago

I don't think that there is a solution for that. Bolt doesn't support having multiple processes accessing the same DB. Using the local transport (no storage) or HA transport (included in the paid version) doesn't have this problem.
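
For context, the lock contention is observable with go.etcd.io/bbolt (the library behind the bolt transport) directly; a minimal sketch reproducing the "timeout" error from the logs above when a second open hits the held file lock:

```go
package main

import (
	"fmt"
	"time"

	bolt "go.etcd.io/bbolt"
)

func main() {
	// The first open acquires an exclusive lock on the data file,
	// like the old pod holding mercure.db on the shared PVC.
	db, err := bolt.Open("mercure.db", 0600, nil)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// A second open (the new pod) waits for the lock and gives up
	// once the timeout elapses -- the "timeout" error in the logs.
	_, err = bolt.Open("mercure.db", 0600, &bolt.Options{Timeout: time.Second})
	fmt.Println(err) // timeout
}
```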

Fabccc commented 1 month ago

So going with no storage on the single deployment should resolve that. I'm going to try it tomorrow.

Fabccc commented 1 month ago

Setting the transport to local://local solved the problem. Thank you!
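
For anyone landing here later: with the official image, the bundled Caddyfile typically takes the transport from the MERCURE_TRANSPORT_URL environment variable, falling back to bolt (verify against the Caddyfile shipped in your image). A sketch of wiring this through the chart; the `extraEnvVars` key is an assumption, so check your chart version's values.yaml for the actual way to set environment variables or the transport URL:

```yaml
# Sketch -- extraEnvVars is a placeholder; use whatever mechanism your
# chart version provides for environment variables or the transport URL.
extraEnvVars:
  - name: MERCURE_TRANSPORT_URL
    value: "local://local"  # in-memory transport, no bolt file lock
```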