timcosta opened this issue 1 year ago
@szalai1 would any of your prometheus work be relevant here? I'm thinking we could reuse the /metrics endpoint as a healthcheck
the /metrics endpoint will return status code 200, and we can add any values there in the form of metrics. The result on that endpoint is one <metricname> <value> pair per line, e.g. connection_ok 1, but that would need to be parsed out of the response if we want to use it for health checking, which is complicated. So a separate /health endpoint is probably easier if we want the pod to report health for Kubernetes.
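A separate endpoint like that could be very small. Here's a minimal sketch of a standalone /health handler, assuming a hypothetical check_kafka_connection() helper standing in for a real broker connectivity check; this is not the actual DataHub actions API, just an illustration of the idea:

```python
# Minimal /health endpoint sketch. check_kafka_connection() is a
# placeholder (assumption), not part of the actions framework.
from http.server import BaseHTTPRequestHandler, HTTPServer


def check_kafka_connection() -> bool:
    # Placeholder: replace with a real broker connectivity check.
    return True


class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health" and check_kafka_connection():
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok\n")
        else:
            # Anything else (or an unhealthy check) -> 503 so the probe fails.
            self.send_response(503)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```

The key difference from reusing /metrics is that the probe only has to look at the status code, with no Prometheus text parsing involved.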
options I see to kill the pod in such a case:
tl;dr I think we need a /ready endpoint that returns 200 once connected to Kafka, plus a readiness probe set in the k8s deployment.
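The probe side of that could look something like the fragment below, a sketch assuming the /ready endpoint is served on port 8080 (both the path and port are placeholders, not the actual deployment config):

```yaml
# Sketch: readiness probe for the actions container (assumed port/path).
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
  failureThreshold: 3
```

With a readiness probe the pod is only pulled from service endpoints; if we also want Kubernetes to restart a wedged container, the same endpoint could back a livenessProbe.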
I would agree with the /ready endpoint. Metrics endpoints are sometimes "heavy" and therefore not great targets for probes; that's my primary concern with using a metrics endpoint.
Do we need readiness probes not just for connectivity to Kafka, but also for each individual action?
E.g. if connectivity to DataHub Kafka looks good, but the Slack action is unhealthy because its credentials don't work, what would the expected behavior be?
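One way the per-action case above could be modeled is to aggregate individual action health into a single readiness signal while still exposing the per-action detail. This is a sketch assuming a hypothetical Action.is_healthy() method; it is not the real datahub-actions interface:

```python
# Sketch: aggregate per-action health into one readiness signal.
# Action.is_healthy() is an assumed interface, not the real one.
from typing import Dict, Tuple


class Action:
    def __init__(self, healthy: bool = True):
        self._healthy = healthy

    def is_healthy(self) -> bool:
        return self._healthy


def overall_status(actions: Dict[str, Action]) -> Tuple[bool, Dict[str, bool]]:
    """Return (ready, per-action detail) so a /ready handler can report both."""
    detail = {name: a.is_healthy() for name, a in actions.items()}
    return all(detail.values()), detail
```

A design choice hiding here: one could instead decide that only Kafka connectivity gates readiness, and that a failing action (e.g. bad Slack credentials) is only surfaced in the detail payload rather than failing the probe, since restarting the pod wouldn't fix bad credentials.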
That's a good question... a couple of brief thoughts
This issue is stale because it has been open for 30 days with no activity. If you believe this is still an issue on the latest DataHub release please leave a comment with the version that you tested it with. If this is a question/discussion please head to https://slack.datahubproject.io. For feature requests please use https://feature-requests.datahubproject.io
Not stale
Hey all -
Bumping this because we just had an actions container enter an unhealthy state where it appeared to run out of file descriptors (SSLError(OSError(24, 'Too many open files'))), but the process didn't exit and we had no way to automatically detect that until we noticed that data ingestion was failing.
Hey all! Wondering if y'all have thought about adding a health probe endpoint (or status checker of some sort) to this project that checks whether healthy connections have been opened to the sources it's supposed to be listening to.
I'm asking because I'm running into a situation where the container is essentially just looping and printing a message like this:
I'm hoping to find a way to have the container exit in situations like this, whether via an external health probe hitting an HTTP endpoint, or some internal status check that exits/throws an exception in a situation like this. Any thoughts?
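The internal-status-check variant could be as simple as a watchdog around the main loop: if the same check fails N times in a row, exit nonzero so the orchestrator restarts the pod. A sketch under that assumption, where check() and do_work() are hypothetical callables, not part of the actions framework:

```python
# Sketch: watchdog loop that exits after repeated failures so
# Kubernetes can restart the container. check()/do_work() are
# placeholders (assumptions).
import sys


def run_with_watchdog(check, do_work, max_failures: int = 5) -> None:
    failures = 0
    while True:
        if check():
            failures = 0  # any success resets the counter
            do_work()
        else:
            failures += 1
            if failures >= max_failures:
                # Exit nonzero so the crash is observable and the pod
                # gets restarted, instead of looping forever on errors.
                sys.exit(1)
```

This is complementary to an external probe: the probe catches a process that is alive but unhealthy, while the watchdog turns a silent error loop (like the file-descriptor case above) into a visible crash.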