Open masonmenges opened 1 year ago
I've posted to the Slack group about this too, but this is not exclusive to Kubernetes: I have stock-standard tasks being sent to a Dask cluster, and when the parent flow crashes for any reason, the slots aren't released.
I wonder if this is the same issue as reported over in https://github.com/PrefectHQ/prefect/issues/5995
Thanks @Samreay — is this helped by https://github.com/PrefectHQ/prefect/pull/8408 ?
Hmmm, if I've understood the merge then potentially, though it would be good to have that CLI endpoint invoked by Prefect itself. I can see the reset method is available in https://docs.prefect.io/api-ref/prefect/cli/concurrency_limit/, so I could add a flow that runs every few minutes and simply calls reset on all the active limits I've got defined (something like the sketch below).
That said, does that reset endpoint clear all slots, or just zombie slots? It looks like the slot override would end up being None, and so it would remove even valid, still-running tasks from the slots, right?
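A minimal sketch of the periodic-reset workaround described above, using the documented `prefect concurrency-limit reset <tag>` CLI command. The tag names and the idea of running this on a schedule are illustrative assumptions, not anything confirmed in this thread:

```python
# Illustrative sketch: a small flow, run every few minutes on a schedule,
# that resets each concurrency limit tag via the documented
# `prefect concurrency-limit reset <tag>` CLI command.
# The tag names below are placeholders for whatever limits you have defined.
import subprocess

from prefect import flow, get_run_logger

LIMIT_TAGS = ["memory_intensive", "gpu_bound"]  # placeholder tags


@flow
def reset_concurrency_limits():
    logger = get_run_logger()
    for tag in LIMIT_TAGS:
        # Resets the limit's active slots. Note the caveat raised above:
        # this also frees slots held by tasks that are still legitimately running.
        result = subprocess.run(
            ["prefect", "concurrency-limit", "reset", tag],
            capture_output=True,
            text=True,
        )
        logger.info("reset %s: %s", tag, result.stdout.strip() or result.stderr.strip())


if __name__ == "__main__":
    reset_concurrency_limits()
```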
I have the same problem with tasks run with the concurrent runner (the default runner). They become zombies, probably because they use Dask for the xarray math and some deadlock occurs in Dask. The problem is that my tasks have a timeout of 1000 s, but they can still be stuck for much longer (hours).
@task(tags=["memory_intensive"], retries=2, retry_delay_seconds=400, timeout_seconds=1000)
The fact that the task timeout does not work reliably is a major problem, but in any case the concurrency limit should be able to release long-running tasks. The concurrency limit could use the task timeout and release them even if they are still running, or there could be a specific timeout on the concurrency limit itself.
Currently this has a big impact on my work: I'm under-using the cluster because roughly 50% of my tasks end up stuck after every ~10 h of processing. I have to reset the concurrency limit, which is a problem because it also releases the tasks that are actually still running (the other 50%). That consumes more memory and makes my whole system more unstable.
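One possible mitigation for "reset also releases the tasks that are still running" is to reset the limit while passing the still-valid task run IDs as the slot override mentioned earlier, so only the zombie slots are freed. This is a sketch only: the client method and filter class names (`reset_concurrency_limit_by_tag`, `read_task_runs`, `TaskRunFilter*`) and the import paths follow my recollection of the Prefect 2.x client and may differ between versions.

```python
# Sketch only: reset a concurrency limit while preserving slots for task runs
# the API still reports as Running, via the `slot_override` argument discussed
# above. Method and filter names are assumptions based on the Prefect 2.x
# client and may differ by version.
import asyncio

from prefect import get_client
from prefect.client.schemas.filters import (
    TaskRunFilter,
    TaskRunFilterState,
    TaskRunFilterStateType,
    TaskRunFilterTags,
)

TAG = "memory_intensive"  # placeholder: the concurrency limit tag to clean up


async def reset_keeping_running_tasks(tag: str) -> None:
    async with get_client() as client:
        # Find task runs with this tag that the API still reports as Running.
        running = await client.read_task_runs(
            task_run_filter=TaskRunFilter(
                tags=TaskRunFilterTags(all_=[tag]),
                state=TaskRunFilterState(
                    type=TaskRunFilterStateType(any_=["RUNNING"])
                ),
            )
        )
        # Reset the limit, but keep slots for those runs so that only slots
        # held by crashed/cancelled runs are actually freed.
        await client.reset_concurrency_limit_by_tag(
            tag=tag,
            slot_override=[tr.id for tr in running],
        )


if __name__ == "__main__":
    asyncio.run(reset_keeping_running_tasks(TAG))
```

Note that for the deadlocked-Dask case above, the stuck tasks are themselves still reported as Running, so this alone would not free them; one would additionally need to drop runs whose start time is older than the task timeout.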
I have a similar issue with task concurrency on Kubernetes as well (mentioned this in Slack). I also notice this in task.map() runs; loops that go item by item do not have this issue.
I am facing the same issue, but not on Kubernetes; it's a local server running in a Docker Compose stack. Tasks are stuck in the Running state far longer than the set timeout value (the timeout is 10 minutes, but tasks are often "running" for 8+ hours).
prefect.runner - Flow run limit reached; 8 flow runs in progress. You can control this limit by passing a `limit` value to `serve` or adjusting the PREFECT_RUNNER_PROCESS_LIMIT setting
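For reference, the limit in that log message is the runner's own flow-run limit, which the log itself says can be passed to `serve`. Raising it does not free zombie slots, it only raises the ceiling. The flow and deployment names below are placeholders:

```python
# Placeholder flow to illustrate the `limit` argument the log message refers to.
# Raising the limit only delays the point at which stuck runs exhaust the slots.
from prefect import flow


@flow
def my_flow():
    ...


if __name__ == "__main__":
    # The log above shows the limit currently at 8; allow up to 16 here.
    my_flow.serve(name="my-deployment", limit=16)
```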
First check
Bug summary
When running a Prefect flow as a Kubernetes job, if the flow run is cancelled while tasks are in a Running state, the concurrency slots used by those tasks are not released, even though the tasks end up in a Cancelled state.
This is reproducible via the following steps, using the code below, with a flow run triggered as a Kubernetes job.
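A minimal sketch of the same shape of reproduction, with illustrative task, flow, and tag names and durations (not the reporter's original code): tasks under a concurrency-limited tag sleep long enough that the flow run can be cancelled while they are still Running.

```python
# Illustrative reproduction sketch (not the reporter's original code).
#
# Setup (tag name illustrative):
#   prefect concurrency-limit create test-limit 2
# After cancelling the flow run mid-execution, inspect the slots:
#   prefect concurrency-limit inspect test-limit
import time

from prefect import flow, task


@task(tags=["test-limit"])
def long_running_task(seconds: int) -> None:
    # Sleep long enough to cancel the parent flow run while this is Running.
    time.sleep(seconds)


@flow
def concurrency_slot_repro():
    # Submit several tasks so some hold slots and others wait on the limit.
    futures = [long_running_task.submit(300) for _ in range(4)]
    for f in futures:
        f.wait()


if __name__ == "__main__":
    concurrency_slot_repro()
```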
KubernetesJob Config:
potentially related but separate issue: https://github.com/PrefectHQ/prefect/issues/7732
Reproduction
Error
No response
Versions
Additional context
Cluster config, minus any sensitive information