Grraahaam opened 1 year ago
Interesting, any thoughts on what we should probe for during the liveness probe? My thought is we can use exec to run a command inside the container.
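An exec-based probe along those lines could look like the sketch below. The container command and file path are placeholders, just to illustrate the shape; kubelet runs the command inside the container and treats a non-zero exit code as unhealthy:

```yaml
# Hypothetical exec liveness probe (command and path are placeholders)
livenessProbe:
  exec:
    command:
      - cat
      - /tmp/healthy        # any cheap command that exits 0 when healthy
  initialDelaySeconds: 5    # wait before the first check
  periodSeconds: 10         # check interval
```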
Basically, today the liveness probe could be what the readiness probe is, since in this case "live" means: the API/service/server is able to handle requests (returning a 200 OK response code).
The readiness probe is often used to check an API's dependencies, like databases, external APIs, and internal services. Ready means: I'm alive and I work correctly, as expected.
Suggestions:

Backend:
- `/api/status` (or e.g. `/api/health/liveness`). The current check + returned status code are fine.
- `/api/ready` (or e.g. `/api/health/readiness`). It could check the connection to the database (MongoDB) and that's all it needs to work as expected, right?

Frontend:
- `/`. The current check + returned status code are fine.
- `/health/readiness`. It could check the connection to the backend and that's all it needs to work as expected, right?

Yes, I think these readiness probes will work!
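Putting the suggested endpoints together, the probe blocks for the two containers might look roughly like this. The paths come from this thread; the port numbers and timings are assumptions for illustration:

```yaml
# Backend container (ports and timings are assumed, not from the repo)
livenessProbe:
  httpGet:
    path: /api/status         # or e.g. /api/health/liveness
    port: 8080                # assumed backend port
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /api/ready          # or e.g. /api/health/readiness; checks MongoDB
    port: 8080
  periodSeconds: 10

# Frontend container
livenessProbe:
  httpGet:
    path: /                   # current check + status code are fine
    port: 80                  # assumed frontend port
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/readiness   # checks the connection to the backend
    port: 80
  periodSeconds: 10
```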
Now that the front and the back are embedded in a single image, we might still need to check both services within it.
I've seen that there's only one readiness probe configured here, although k8s has different behaviours depending on the probe type, as explained here.
Can I add the missing live/readiness probes (front/back)?
Yes, please make a PR for it!
Feature description
Improve k8s pod management by adding consistent probes on both front/backend
Today, front/back only have a `readinessProbe`.
Why would it be useful?
For k8s to be able to detect both live/ready states and apply automatic actions (e.g. restart the container, detach the pod from the related Service)
Additional context