melezhik opened this issue 3 years ago
We intended `cro run` more as a development convenience than an ideal way to run in production (the `cro` command-line tool is documented as a development tool). Of course, that won't stop anyone... :-)
What I personally do for zero downtime updates is leave the lifting to Kubernetes; essentially, upon a deploy it starts the new container while leaving the old one running, then when the new one is ready it starts to route traffic over to it. (While a readiness probe is the reliable way to do this, I've found that there's already some built-in default delay, and it tends to be long enough for smaller Cro applications anyway.)
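The rolling-update behaviour described above can be made explicit in the Deployment spec. The sketch below is illustrative only: the image name, port, and probe path are assumptions, not anything from the original application.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cro-app
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep the old pod serving until the new one is ready
      maxSurge: 1         # allow one extra pod during the rollout
  selector:
    matchLabels:
      app: cro-app
  template:
    metadata:
      labels:
        app: cro-app
    spec:
      containers:
      - name: cro-app
        image: registry.example.com/cro-app:latest   # hypothetical image
        ports:
        - containerPort: 8080
        readinessProbe:            # traffic moves over only once this passes
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 2
          periodSeconds: 2
```

With `maxUnavailable: 0` and a readiness probe, Kubernetes never routes traffic to the new container before it answers, which is exactly the zero-downtime hand-off being relied on.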
Hi Jonathan! Thanks for the quick response. I get that, but I am looking for a less expensive solution, not something like Kubernetes )))
Let me maybe reshape my question: can I set up a custom (aka maintenance) page using cro to notify users that my app is being updated right now? I know nginx could do that, but it would be nice if I could do this using Raku/cro...
Thanks
> I am looking for a less expensive solution, not something like Kubernetes
Yeah, for some things it's like using a flame thrower to swat a fly... :-)
As far as options go:

- Use `Cro::HTTP` to write a small service that proxies requests to your application and, upon failure, serves a maintenance page. Presumably this proxy service would only need very occasional updates compared to the larger application, so it would help a bit. Not close to zero downtime, but a mitigation. Downside: another service, more latency.
- Forgo the `cro` runner and script something along the lines of: keep a PID file for the currently running service; when you update, start a new instance of the service without killing the existing one. When it is ready, but before calling `.start` on the `Cro::Service`, send SIGINT to the PID of the running version. Wait for it to exit, then start listening and write the PID of the new now-running process into the file. Needs a little care to make it robust. Lots of variations on this theme.

Yeah, this cro-based customized proxy server would be a good idea, at least for small/medium-size projects where performance is not that important...
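The PID-file hand-off described above could be sketched roughly as follows. The file name, route, port, and the `kill -0` polling loop are all my assumptions for illustration, not an established Cro pattern:

```raku
use Cro::HTTP::Server;
use Cro::HTTP::Router;

my $pid-file = 'service.pid'.IO;   # assumed location of the PID file

my $application = route {
    get -> {
        content 'text/plain', 'Hello from the freshly deployed instance';
    }
}

my Cro::Service $service = Cro::HTTP::Server.new(
    :host<0.0.0.0>, :port<8080>, :$application
);

# If a previous instance is running, ask it to stop before we listen.
if $pid-file.e {
    my $old-pid = $pid-file.slurp.trim;
    run 'kill', '-INT', $old-pid;
    # Crude wait: `kill -0` succeeds for as long as the process exists.
    while run('kill', '-0', $old-pid, :out, :err).exitcode == 0 {
        sleep 0.1;
    }
}

$service.start;
$pid-file.spurt("$*PID\n");

react whenever signal(SIGINT) {
    $service.stop;
    done;
}
```

The small gap between the old process exiting and the new one listening is why this is "close to" rather than truly zero downtime; a frontend proxy that retries connections would paper over that gap.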
We have found haproxy to be very useful for this. It's small, stable and scalable. With its retry feature, you should be able to hold requests until the backend is ready again after a restart (users would just notice a delay but will get served eventually). If you scale up to multiple backends, you just need to add them to your configuration and get failover that way (i.e. you restart one of the backends and the other will get all requests until the first one is ready again). It also supports "backup" servers out of the box, i.e. backends that will only get used if all primaries are down. Such a backup server can be a simple nginx serving a static maintenance page.
Much of this can be achieved with nginx as a frontend proxy as well, but it's much harder to get right, and they are pushing their commercial NGINX Plus offering hard for this use case.
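The haproxy setup described above might look something like the fragment below. Addresses, names, and the health-check path are assumptions for illustration:

```haproxy
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    retries 3             # retry connections while the backend restarts
    option redispatch     # allow retries to go to another server

frontend app_front
    bind *:80
    default_backend app_back

backend app_back
    option httpchk GET /
    server app1 127.0.0.1:8080 check
    # Backup server only used when all primaries are down, e.g. a
    # static nginx serving a maintenance page.
    server maintenance 127.0.0.1:8081 backup
```

During a restart, failed connection attempts to `app1` are retried; if the whole backend stays down, traffic falls through to the `backup` server and users see the maintenance page instead of a 502.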
Hi guys! First of all, thanks for the great product.
I use cro for a web application (https://mybf.io); my `.cro.yaml` file is this:

When I update any file not listed in the `ignore` list, it takes cro a while to restart the application to pick up the changes; during this time my application is not available and my nginx server returns a 502 error. Any cure for that?
My web app start command is:

```
nohup cro run > cro.log &
```
A snippet of `app.raku` running the cro web server is:

PS: I know it could be hard to fix this on the cro side; I am just interested in how someone would solve it.