kubernetes / kubernetes

Production-Grade Container Scheduling and Management
https://kubernetes.io
Apache License 2.0

postContainer - something to run after a container exits successfully #122242

Closed sls1j closed 3 months ago

sls1j commented 10 months ago

What would you like to be added?

We already have initContainers for pods. It would be nice to have a postContainer that would run after the pod's main container has exited successfully. For example:

apiVersion: batch/v1
kind: Job
metadata:
  name: big-job
spec:
  template:
    spec:
      initContainers:
      - image: 192.168.4.120:5000/s3-pull:v1.000
        command:
        - "dotnet"
        - "s3Pull.dll"
        - "http://192.168.4.120:4566"
        - "/data/input"
        name: s3-pull
        volumeMounts:
        - name: shared-dir
          mountPath: /data
      containers:
      - name: big-job
        image: 192.168.4.120:5000/job-test:v1.001
        command: ["dotnet",  "jobTest.dll"]
        volumeMounts:
        - name: shared-dir
          mountPath: /data
      postContainers:
      - image: 192.168.4.120:5000/s3-push:v1.000
        command:
        - "dotnet"
        - "s3Push.dll"
        - "http://192.168.4.120:4566"
        - "/data/output"
        name: s3-push
        volumeMounts:
        - name: shared-dir
          mountPath: /data
      volumes:
      - name: shared-dir
        emptyDir: {}
      restartPolicy: Never
  backoffLimit: 4

Why is this needed?

I have a situation where I'm receiving containers from a third party. I can use an initContainer to load files from S3 into the pod's directory so that the third party doesn't have to worry about S3, key distribution, etc. The container then executes and places its results in a specific location. However, currently I have to write those results to a persistent volume claim and run a separate process that monitors it and pushes the data into S3. It would be very nice to have a postContainer that runs after the job has completed, has access to the job's directory, and could push the files to S3.

k8s-ci-robot commented 10 months ago

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

HirazawaUi commented 10 months ago

/sig node

AxeZhan commented 10 months ago

/cc

aojea commented 10 months ago

You can achieve the same by adding a container that captures the SIGTERM signal on termination and implements that logic.
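
For illustration, a rough sketch of that workaround (untested; the image names, endpoint, and s3Push.dll invocation are copied from the example above, and it assumes the s3-push image ships a POSIX shell): the extra container stays idle until the pod is terminated, then runs the upload from its SIGTERM trap.

      containers:
      - name: big-job
        image: 192.168.4.120:5000/job-test:v1.001
        command: ["dotnet", "jobTest.dll"]
        volumeMounts:
        - name: shared-dir
          mountPath: /data
      # extra "post" container: stays alive until the pod is terminated,
      # then pushes /data/output to s3 from its SIGTERM handler
      - name: s3-push
        image: 192.168.4.120:5000/s3-push:v1.000
        command: ["/bin/sh", "-c"]
        args:
        - |
          trap 'dotnet s3Push.dll http://192.168.4.120:4566 /data/output; exit 0' TERM
          while true; do sleep 1; done
        volumeMounts:
        - name: shared-dir
          mountPath: /data

One caveat for a Job: the pod keeps running until someone deletes it, because the extra container never exits on its own; that is part of what the native sidecar support mentioned below addresses.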

sls1j commented 10 months ago

Hmmm, I think I see what you are saying: add another container to the spec that has access to the shared volume and monitors for SIGTERM within itself, since the SIGTERM is sent to every container in the pod when it terminates... That is brilliant. I think I can see how to do that. Thanks!

thockin commented 8 months ago

The new sidecar support should help, too - sidecar containers will terminate AFTER the main app container.
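
For reference, a rough sketch of how that could look with native sidecars (restartPolicy: Always on an init container; alpha behind the SidecarContainers feature gate in v1.28 and enabled by default since v1.29), again untested and assuming the s3-push image has a shell:

      initContainers:
      - name: s3-pull
        image: 192.168.4.120:5000/s3-pull:v1.000
        command: ["dotnet", "s3Pull.dll", "http://192.168.4.120:4566", "/data/input"]
        volumeMounts:
        - name: shared-dir
          mountPath: /data
      # native sidecar: started before the main container, terminated after it exits
      - name: s3-push
        image: 192.168.4.120:5000/s3-push:v1.000
        restartPolicy: Always
        command: ["/bin/sh", "-c"]
        args:
        - |
          trap 'dotnet s3Push.dll http://192.168.4.120:4566 /data/output; exit 0' TERM
          while true; do sleep 1; done
        volumeMounts:
        - name: shared-dir
          mountPath: /data

The rest of the Job spec stays as in the original example. Because sidecars are terminated only after the regular containers finish, the kubelet sends s3-push a SIGTERM once big-job exits, the trap pushes /data/output, and the Job can complete without anyone having to delete the pod.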

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 3 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/kubernetes/issues/122242#issuecomment-2226000880):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.