woehrl01 opened this issue 10 months ago
Not sure if this is related, but I also found that after that change of keeping history (and not setting a TTL on Jobs), Kueue stopped working and showed an insane number of admitted workloads (using v0.5.2).
Deleting all the succeeded Jobs by hand recovered it.
I guess the admitted-workloads bug is fixed by #1654. It would still be nice to remove the Workload resource altogether.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Are you asking to just delete the Workload but keep the parent Job (or Job CRD)?
@alculquicondor yes, that was the idea. The Workload will be deleted eventually anyway, but this would free up etcd storage until the Job's own TTL has been reached.
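To make the idea concrete, here is a rough sketch (not Kueue code, just an illustration built on the public Kueue API types) of deleting the Workload owned by a Job once that Job has finished, while the Job object itself stays around until its own ttlSecondsAfterFinished or history limit removes it:

```go
// Hypothetical helper, not part of Kueue: delete the Workload(s) owned by a finished
// Job so the objects disappear from etcd right away, while the Job itself remains
// until its own ttlSecondsAfterFinished (or a history limit) cleans it up.
package workloadgc

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"

	kueue "sigs.k8s.io/kueue/apis/kueue/v1beta1"
)

// deleteWorkloadsForJob removes every Workload whose ownerReference points at the
// given Job; Kueue sets the parent Job as the owner of the Workload it creates.
func deleteWorkloadsForJob(ctx context.Context, c client.Client, job *batchv1.Job) error {
	var wls kueue.WorkloadList
	if err := c.List(ctx, &wls, client.InNamespace(job.Namespace)); err != nil {
		return err
	}
	for i := range wls.Items {
		for _, ref := range wls.Items[i].OwnerReferences {
			if ref.UID != job.UID {
				continue
			}
			if err := c.Delete(ctx, &wls.Items[i]); err != nil && !apierrors.IsNotFound(err) {
				return err
			}
		}
	}
	return nil
}
```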
/retitle Optional garbage collection of finished Workloads
🤔 maybe we can also do this for orphan Workloads #1789
/assign
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@kannon92: Reopened this issue.
Reopening since the work seems to be in flight.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@woehrl01: Reopened this issue.
/remove-lifecycle rotten
What would you like to be added:
I would like to have a manager option to delete Workload resources as soon as the scheduled Job is finished (or after a configurable TTL).
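A minimal sketch of how such an option could work internally, assuming a hypothetical ttlAfterFinished setting; only the Kueue API package and the Finished condition name come from the Kueue repository, the reconciler itself is purely illustrative:

```go
// Illustrative sketch of a TTL-based garbage collector for finished Workloads.
// ttlAfterFinished is an assumed knob, not an existing Kueue option.
package workloadgc

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	apimeta "k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	kueue "sigs.k8s.io/kueue/apis/kueue/v1beta1"
)

type workloadTTLReconciler struct {
	client.Client
	ttlAfterFinished time.Duration // hypothetical option; 0 means "delete as soon as finished"
}

func (r *workloadTTLReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var wl kueue.Workload
	if err := r.Get(ctx, req.NamespacedName, &wl); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Only finished Workloads are garbage-collection candidates.
	finished := apimeta.FindStatusCondition(wl.Status.Conditions, kueue.WorkloadFinished)
	if finished == nil || finished.Status != metav1.ConditionTrue {
		return ctrl.Result{}, nil
	}

	// Wait out the TTL, then delete the Workload to free etcd and controller memory.
	if remaining := r.ttlAfterFinished - time.Since(finished.LastTransitionTime.Time); remaining > 0 {
		return ctrl.Result{RequeueAfter: remaining}, nil
	}
	if err := r.Delete(ctx, &wl); err != nil && !apierrors.IsNotFound(err) {
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}
```

With ttlAfterFinished set to 0 this would delete a Workload as soon as it is marked Finished; a positive value keeps it around that long for inspection before it is removed.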
Why is this needed:
I changed a configuration to retain more history of job executions of a CronJob, and the memory consumption of the kueue-manager more than doubled.
Completion requirements:
This enhancement requires the following artifacts:
The artifacts should be linked in subsequent comments.