Open stinovlas opened 7 months ago
@stinovlas this is of interest to us as well, and I had a discussion going with @s3rius some months ago on it. For worker processes this should be fairly simple, provided that the broker has a result backend. The scheduler can be an issue, however.

Hello,
It's common practice to perform healthchecks inside containers in dockerized environments. If the healthcheck fails to pass within defined parameters, the Docker daemon restarts the container. HTTP / uwsgi processes usually allow such a healthcheck via `curl` or `uwsgi_curl`. The situation is much more chaotic with servers, but it should be possible to provide some CLI to check that the worker or scheduler is working properly.

Right now, I could add a custom job that just returns some sentinel (e.g. a fixed string). By kicking that job and checking for its result, I could verify that the worker process is up and running and processing kicked tasks. A similar check could be performed for the scheduler. This is not very convenient, because I have to add a new module with a custom healthcheck task to each project deployment (or to the project itself, which doesn't make a lot of sense).
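The kick-and-wait pattern described above can be sketched with the standard library alone. To be clear, this is a stdlib-only simulation, not taskiq's actual API: the queue stands in for a broker with a result backend, and the `worker`, `healthcheck`, and `SENTINEL` names are all hypothetical.

```python
import asyncio

SENTINEL = "taskiq-health-ping"  # hypothetical fixed string the sentinel job returns


async def worker(queue: asyncio.Queue) -> None:
    """Minimal stand-in for a worker process: pops jobs and publishes results."""
    while True:
        job, result_future = await queue.get()
        if job == "ping":
            result_future.set_result(SENTINEL)
        queue.task_done()


async def healthcheck(queue: asyncio.Queue, timeout: float = 5.0) -> bool:
    """Kick the sentinel job and wait for its result (bounded by a timeout),
    analogous to kicking a task and polling the result backend."""
    result_future: asyncio.Future = asyncio.get_running_loop().create_future()
    await queue.put(("ping", result_future))
    try:
        result = await asyncio.wait_for(result_future, timeout)
    except asyncio.TimeoutError:
        return False  # worker is stuck or dead: nothing consumed the job in time
    return result == SENTINEL


async def main() -> bool:
    queue: asyncio.Queue = asyncio.Queue()
    worker_task = asyncio.create_task(worker(queue))
    ok = await healthcheck(queue)
    worker_task.cancel()
    return ok


if __name__ == "__main__":
    print("healthy" if asyncio.run(main()) else "unhealthy")
```

The timeout is the important part: a dead worker fails the check by silence rather than by an explicit error, so the check must never wait unboundedly.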
What do you think about adding such a healthcheck task to taskiq itself? It could provide a new subcommand `taskiq check` or `taskiq healthcheck` (or maybe `taskiq status`?) that would kick the job and check its result. The exit code of this CLI tool would then signify the check result (it could also write some useful info to `stdout`). A scheduler check could be performed similarly (maybe by scheduling a task for `now()`), but I haven't thought that through yet.
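If such a subcommand existed, wiring it into the dockerized setup mentioned earlier would be straightforward. A sketch of a compose-file healthcheck, assuming a hypothetical `taskiq check` command that takes the broker path (as `taskiq worker` does) and exits 0 on success, non-zero on failure; the subcommand and service names here are illustrative, not an existing taskiq CLI:

```yaml
services:
  worker:
    image: my-app  # hypothetical image running `taskiq worker my_app.broker:broker`
    healthcheck:
      # Exit code 0 = healthy, non-zero = unhealthy.
      test: ["CMD", "taskiq", "check", "my_app.broker:broker"]
      interval: 30s
      timeout: 10s
      retries: 3
```

One caveat: plain `docker compose` only marks the container as unhealthy; actually restarting it on failure needs an orchestrator (e.g. Swarm or Kubernetes) or a restart helper watching health status.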