Closed: sls1j closed this issue 3 months ago.
This issue is currently awaiting triage.
If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/sig node
/cc
You can achieve the same by adding a container that captures the SIGTERM signal on termination and implements that logic.
Hmmm, I think I see what you are saying. Add another container within the spec that has access to the internal volume and monitors for a SIGTERM within itself, since the SIGTERM is delivered pod-wide and should be sent to every container within the pod... That is brilliant. I think I can see how to do that. Thanks!
The new sidecar support should help, too - they will terminate AFTER the main app container.
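A minimal sketch of that workaround, assuming Kubernetes 1.28+ with native sidecar support (restartPolicy: Always on an init container), an image that ships the aws CLI, and placeholder bucket, image, and path names; credentials (e.g. IRSA or a mounted Secret) are omitted:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: third-party-job
spec:
  template:
    spec:
      restartPolicy: Never
      # Give the sidecar enough time to finish the final upload.
      terminationGracePeriodSeconds: 120
      volumes:
        - name: workdir
          emptyDir: {}
      initContainers:
        # Native sidecar: starts before the main container and only
        # receives SIGTERM after the main container has exited.
        - name: s3-uploader
          image: amazon/aws-cli
          restartPolicy: Always
          command: ["/bin/sh", "-c"]
          args:
            - |
              # On SIGTERM, push whatever the main container produced.
              trap 'aws s3 sync /work/results s3://example-bucket/results; exit 0' TERM
              # Idle until the pod starts terminating.
              while true; do sleep 1; done
          volumeMounts:
            - name: workdir
              mountPath: /work
      containers:
        # The 3rd-party container writes its output under /work/results.
        - name: third-party
          image: example.com/third-party:latest
          volumeMounts:
            - name: workdir
              mountPath: /work
```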
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What would you like to be added?
We already have the ability to specify an initContainer for pods. It would be nice to have a postContainer that runs after the pod has exited successfully.
Why is this needed?
I have a situation where I'm receiving containers from a 3rd party. I can use an initContainer to load files from S3 into the pod's directory so that the 3rd party doesn't have to worry about S3, key distribution, etc. The container then executes and places the results in a specific location. However, currently I have to write those results to a persistent volume claim and run a separate process that monitors it and pushes the data into S3. It would be very nice to have a postContainer that runs after the job has completed, has access to the job's directory, and could push the files to S3.
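Roughly what that setup looks like today, and where the requested hook would sit. This is a sketch with placeholder image, bucket, and path names, and credentials are omitted; postContainers is the hypothetical field being asked for in this issue, not an existing API:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: third-party-job
spec:
  template:
    spec:
      restartPolicy: Never
      volumes:
        - name: data
          emptyDir: {}
      initContainers:
        # Runs first: pulls the input files from S3 so the 3rd-party image
        # never needs credentials or any knowledge of S3.
        - name: fetch-inputs
          image: amazon/aws-cli
          command: ["aws", "s3", "sync", "s3://example-bucket/inputs", "/data/inputs"]
          volumeMounts:
            - name: data
              mountPath: /data
      containers:
        # The 3rd-party container reads /data/inputs and writes /data/results.
        - name: third-party
          image: example.com/third-party:latest
          volumeMounts:
            - name: data
              mountPath: /data
      # Hypothetical, as requested in this issue (not a real field today):
      # postContainers:
      #   - name: push-results
      #     image: amazon/aws-cli
      #     command: ["aws", "s3", "sync", "/data/results", "s3://example-bucket/results"]
      #     volumeMounts:
      #       - name: data
      #         mountPath: /data
```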