kubeflow / mpi-operator

Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.)
https://www.kubeflow.org/docs/components/training/mpi/
Apache License 2.0

ttlSecondsAfterFinished for MPIJob, not only launcher #644

Open hy00nc opened 1 month ago

hy00nc commented 1 month ago

Do we have a plan to extend ttlSecondsAfterFinished to the MPIJob level, not just the launcher?

alculquicondor commented 1 month ago

Do you mean that you want to keep the pod objects until the TTL finishes?

Or do you want to keep them running?

hy00nc commented 1 month ago

@alculquicondor, thanks for the reply. I want the MPIJob resource itself to be deleted after the TTL, just like how ttlSecondsAfterFinished works in MPIJob V1. In the current implementation, it is not cleaned up until deleted explicitly, right?
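For context, a sketch of the field in question (assuming the v2beta1 API, where ttlSecondsAfterFinished sits under runPolicy; the name `example-mpijob` is illustrative). The request is that the MPIJob object itself be garbage-collected once this TTL expires, not only the launcher Job:

```yaml
apiVersion: kubeflow.org/v2beta1
kind: MPIJob
metadata:
  name: example-mpijob   # illustrative name
spec:
  runPolicy:
    # Requested behavior: delete the whole MPIJob (not only the
    # launcher Job) 100 seconds after it finishes.
    ttlSecondsAfterFinished: 100
  # (mpiReplicaSpecs omitted)
```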

alculquicondor commented 1 month ago

oh, gotcha. I don't know if that's how other Kubeflow APIs work. If they do, we can bring MPIJob back to parity.

tenzen-y commented 1 month ago

> oh, gotcha. I don't know if that's how other Kubeflow APIs work. If they do, we can bring MPIJob back to parity.

Indeed, the other Jobs will be removed after ttlSecondsAfterFinished like this:

https://github.com/kubeflow/training-operator/blob/be5df91eb43e2fdfa1b0a7005f7aeb8cc3a52fb1/pkg/controller.v1/common/job.go#L428-L435

hy00nc commented 1 month ago

Would it make sense to extend activeDeadlineSeconds and backoffLimit as well? I guess these are also currently limited to the launcher, but other Kubeflow jobs apply them at the job level.

alculquicondor commented 1 month ago

Those should be fine on just the launcher Job, because the launcher Job is what controls the execution. If it finishes as Failed, the rest of the pods would terminate too, IIRC.
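For illustration, the two runPolicy fields under discussion, with hypothetical values (assuming the v2beta1 API; the comment above explains why enforcing them on the launcher Job alone is sufficient):

```yaml
apiVersion: kubeflow.org/v2beta1
kind: MPIJob
spec:
  runPolicy:
    # Both values are illustrative. They are enforced on the launcher
    # batch Job; once the launcher fails (deadline exceeded or retries
    # exhausted), the worker pods are torn down as well.
    activeDeadlineSeconds: 3600
    backoffLimit: 3
```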