kubeflow / training-operator

Distributed ML Training and Fine-Tuning on Kubernetes
https://www.kubeflow.org/docs/components/training
Apache License 2.0

Introduce batch/v1 Job with Indexed completion mode #1718

Open tenzen-y opened 1 year ago

tenzen-y commented 1 year ago

/kind discussion

We often end up implementing features that already exist in batch/v1 Job (e.g., https://github.com/kubeflow/common/pull/196), because once a custom Job resource (e.g., TFJob) is created, the training operator creates a block of Pod + Service objects for each rank rather than a batch/v1 Job + Service.

IIUC, the training-operator was designed this way because its core architecture predates the release of the Indexed Job and Pod failure policy features.

So I would like to propose an architecture in which the training-operator creates a batch/v1 Job with Indexed completion mode + Service, instead of Pod + Service.

Introducing batch/v1 Job eliminates the need to implement and maintain features that duplicate batch/v1 Job, and makes it easy to adopt new batch/v1 Job features.
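To make the proposal concrete, here is a minimal sketch (not actual training-operator code; the helper name `workerJobAndService` is hypothetical) of what the controller could create for a worker replica type: one batch/v1 Job with Indexed completion mode plus a single headless Service, instead of a Pod + Service per rank.

```go
package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/utils/ptr"
)

// workerJobAndService builds one Indexed Job covering all worker ranks and a
// single headless Service, replacing the per-rank Pod + Service objects.
func workerJobAndService(jobName string, workers int32, podTemplate corev1.PodTemplateSpec) (*batchv1.Job, *corev1.Service) {
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: jobName + "-worker"},
		Spec: batchv1.JobSpec{
			// Indexed completion mode gives each Pod a stable completion index,
			// which can stand in for the training rank.
			CompletionMode: ptr.To(batchv1.IndexedCompletion),
			Completions:    ptr.To(workers),
			Parallelism:    ptr.To(workers),
			Template:       podTemplate,
		},
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: jobName + "-worker"},
		Spec: corev1.ServiceSpec{
			// A single headless Service (instead of one Service per rank)
			// exposes the per-index Pod hostnames for Pod-to-Pod communication.
			ClusterIP: corev1.ClusterIPNone,
			Selector:  map[string]string{"job-name": job.Name},
		},
	}
	return job, svc
}
```

The Pod template would also need its subdomain pointed at the headless Service so the per-index hostnames resolve, as in the upstream "Job with Pod-to-Pod Communication" pattern.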

/cc @kubeflow/wg-training-leads

johnugeorge commented 1 year ago

Interesting. This would need a major rewrite?

/cc @gaocegege @zw0610 @terrytangyuan

tenzen-y commented 1 year ago

This would need a major rewrite?

Probably, yes. By replacing Pods with a batch/v1 Indexed Job, we can use the Job with Pod-to-Pod Communication logic.

Also, we could probably remove most of the kubeflow/common code and rely on the batch/v1 Job logic.

For example, we can replace ReplicaSpec.RestartPolicy with PodFailurePolicy:

https://github.com/kubeflow/common/blob/34276e9d2ffa39f5922479bff87dc5ed5ed94cfb/pkg/apis/common/v1/types.go#L79-L83

https://github.com/kubernetes/kubernetes/blob/c9ed04762f94a319d7b1fb718dc345491a32bea6/pkg/apis/batch/types.go#L220-L229

This means that once we replace it, we no longer need to maintain the logic that decides whether to restart Pods.
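As a rough illustration (not the real mapping; the helper name `podFailurePolicyForExitCode` is hypothetical), an exit-code-based RestartPolicy could translate into a Pod failure policy along these lines, so that the Job controller, rather than the operator, decides whether a failed Pod is recreated:

```go
package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
)

// podFailurePolicyForExitCode is a hypothetical translation of an
// exit-code-based RestartPolicy into batch/v1 PodFailurePolicy rules.
func podFailurePolicyForExitCode(retryableExitCodes []int32) *batchv1.PodFailurePolicy {
	return &batchv1.PodFailurePolicy{
		Rules: []batchv1.PodFailurePolicyRule{
			{
				// Ignore disruptions (e.g. preemption, eviction) so they do not
				// consume the Job's backoffLimit.
				Action: batchv1.PodFailurePolicyActionIgnore,
				OnPodConditions: []batchv1.PodFailurePolicyOnPodConditionsPattern{
					{Type: corev1.DisruptionTarget, Status: corev1.ConditionTrue},
				},
			},
			{
				// Any exit code outside the retryable set fails the whole Job,
				// so the operator no longer has to judge whether to restart Pods.
				Action: batchv1.PodFailurePolicyActionFailJob,
				OnExitCodes: &batchv1.PodFailurePolicyOnExitCodesRequirement{
					Operator: batchv1.PodFailurePolicyOnExitCodesOpNotIn,
					Values:   retryableExitCodes,
				},
			},
		},
	}
}
```

With rules like these, retryable exit codes simply count against the Job's backoffLimit (and the Pod is recreated), disruptions are ignored, and everything else fails the Job outright.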

gaocegege commented 1 year ago

/cc @alculquicondor

tenzen-y commented 1 year ago

IIUC, the Indexed Job feature was aimed at tf-operator, mpi-operator, and more.

https://github.com/kubernetes/kubernetes/issues/99497#issuecomment-786776056

zw0610 commented 1 year ago

Introducing batch/v1 Job eliminates the need to implement and maintain features that duplicate batch/v1 Job, and makes it easy to adopt new batch/v1 Job features.

Since the major benefit is to reduce the duplicated development workload for certain features, do we have a list of features that are not implemented in the training-operator but would be covered once batch/v1 Job is adopted?

tenzen-y commented 1 year ago

Introducing batch/v1 Job eliminates the need to implement and maintain features that duplicate batch/v1 Job, and makes it easy to adopt new batch/v1 Job features.

Since the major benefit is to reduce the duplicated development workload for certain features, do we have a list of features that are not implemented in the training-operator but would be covered once batch/v1 Job is adopted?

Good point. I haven't created such a list yet. So far, I have found the suspend Job and Pod disruption conditions features.

I'll create a list and share it with you in this issue.

alculquicondor commented 1 year ago

+1. This has been discussed before in #1303.

If you could provide a list of any missing functionality in the Job API, we could add those to the roadmap. We made a lot of progress with failure policies, but IIUC, there's also a need for some form of success policy?

Also, @ahg-g is working on a proposal for a multi-pod-template API, which he is going to present at the Batch Working Group meeting on Feb 2nd: https://docs.google.com/document/d/1XOeUN-K0aKmJJNq7H07r74n-mGgSFyiEDQ3ecwsGhec/edit#heading=h.ukbaidczvy3r

ahg-g commented 1 year ago

+1

The benefit is not just deduplicating code, but also helping to defragment the ecosystem. While I do understand the benefit of having dedicated APIs for MPI, TF training, etc., it is important that they build on a common API that we can use for job-level scheduling and autoscaling.

As @alculquicondor mentioned, I am working on a proposal that I will make public next week and that will be discussed in the Kubernetes Batch Working Group. I am also happy to schedule a time to discuss it with the Kubeflow community; could you please let me know how/where I can put this topic on the meeting agenda?

tenzen-y commented 1 year ago

If you could provide a list of any missing functionality in the Job API, we could add those to the roadmap. We made a lot of progress with failure policies, but IIUC, there's also a need for some form of success policy?

@alculquicondor I think introducing the success policy would be useful for the training-operator, since we currently maintain that logic ourselves in the tensorflow-controller.

https://github.com/kubeflow/training-operator/blob/ddf372c8fe0d179674f3dc5e61fbd71246c32924/pkg/controller.v1/tensorflow/tfjob_controller.go#L505-L534

Maybe the tensorflow-controller or training-controller can be one of the use cases for introducing the success policy to batch/v1 Job.
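For illustration only: the "succeed once worker 0 completes" rule that tfjob_controller.go implements could be expressed roughly as in the sketch below, using the SuccessPolicy type that was later added to batch/v1 (field names taken from k8s.io/api/batch/v1; the helper name `chiefSuccessPolicy` is hypothetical).

```go
package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	"k8s.io/utils/ptr"
)

// chiefSuccessPolicy declares the whole Job succeeded once the Pod with
// completion index 0 (the chief / worker-0) finishes successfully, mirroring
// the success handling currently coded into the tensorflow-controller.
func chiefSuccessPolicy() *batchv1.SuccessPolicy {
	return &batchv1.SuccessPolicy{
		Rules: []batchv1.SuccessPolicyRule{
			{SucceededIndexes: ptr.To("0")},
		},
	}
}
```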

Also, @ahg-g is working on a proposal for a multi-pod-template API, which he is going to present at the Batch Working Group meeting on Feb 2nd: https://docs.google.com/document/d/1XOeUN-K0aKmJJNq7H07r74n-mGgSFyiEDQ3ecwsGhec/edit#heading=h.ukbaidczvy3r

Thanks for sharing. I'm interested in a multi-pod-template API, since we could consider using it after we introduce batch/v1 Job to the training-operator.

Are there KEPs for a multi-pod-template API in k/enhancements?

tenzen-y commented 1 year ago

The benefit is not just deduplicating code, but also helping to defragment the ecosystem. While I do understand the benefit of having dedicated APIs for MPI, TF training, etc., it is important that they build on a common API that we can use for job-level scheduling and autoscaling.

@ahg-g Yes, exactly. I think so too.

As @alculquicondor mentioned, I am working on a proposal that I will make public next week and that will be discussed in the Kubernetes Batch Working Group. I am also happy to schedule a time to discuss it with the Kubeflow community; could you please let me know how/where I can put this topic on the meeting agenda?

We have bi-weekly community meetings for WG Training, and the meeting notes are in https://docs.google.com/document/d/1MChKfzrKAeFRtYqypFbMXL6ZIc_OgijjkvbqmwRV-64/edit#.

I rarely attend the meetings, but you can share the multi-pod-template API with the WG Training leads.

ahg-g commented 1 year ago

Are there KEPs for a multi-pod-template API in k/enhancements?

Not yet. As I mentioned above, I will share a Google Doc next week; it is easier to discuss such a significant proposal in a Google Doc before we move to a KEP. Note that the plan is to initially host the API under the Kueue project to iterate quickly on the API, with the goal of eventually upstreaming it.

tenzen-y commented 1 year ago

Are there KEPs for a multi-pod-template API in k/enhancements?

Not yet. As I mentioned above, I will share a Google Doc next week; it is easier to discuss such a significant proposal in a Google Doc before we move to a KEP. Note that the plan is to initially host the API under the Kueue project to iterate quickly on the API, with the goal of eventually upstreaming it.

I see. Thanks for letting me know.

tenzen-y commented 1 year ago

I will work on this issue after the Kubeflow v1.7 feature freeze, since that date is coming up. Then, I will share a table mapping batch/v1 Job features to training-operator Job features in this issue.

If this issue turns out to require significant API changes, I will submit a proposal to this repository.

Also, I will work on the actual implementation after #1714 is done.

ahg-g commented 1 year ago

/cc @richardsliu

tenzen-y commented 1 year ago

Maybe we need to wait for the elastic Indexed Job feature (mutable completions for Indexed Jobs) to support Elastic PyTorchJob.

ref: https://github.com/kubernetes/enhancements/issues/3715
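For context, here is a sketch of what resizing could look like once that feature is available. Assumptions: `resizeIndexedJob` is a hypothetical helper, and elastic Indexed Jobs require completions and parallelism to be updated together to the same value.

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/utils/ptr"
)

// resizeIndexedJob scales an Indexed worker Job to the desired number of
// workers by mutating completions and parallelism in lockstep, instead of the
// operator creating or deleting individual Pods.
func resizeIndexedJob(ctx context.Context, c kubernetes.Interface, ns, name string, workers int32) error {
	job, err := c.BatchV1().Jobs(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Elastic Indexed Jobs only allow mutating completions when it stays equal
	// to parallelism, so both fields are set to the same value.
	job.Spec.Completions = ptr.To(workers)
	job.Spec.Parallelism = ptr.To(workers)
	_, err = c.BatchV1().Jobs(ns).Update(ctx, job, metav1.UpdateOptions{})
	return err
}
```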

alculquicondor commented 1 year ago

Is elastic Pytorch the only training job that supports resizing?

Does it matter which workers get removed?

tenzen-y commented 1 year ago

Is elastic Pytorch the only training job that supports resizing?

IIUC, we support only one master replica for PyTorchJob. So yes.

https://github.com/kubeflow/training-operator/blob/b87c6fa0be186ecf8bc901c74cdc52a529ddd536/pkg/apis/kubeflow.org/v1/pytorch_validation.go#L62-L66

Does it matter which workers get removed?

Maybe. It does not matter which worker is deleted, since Elastic PyTorchJob uses a local elastic agent.

@gaocegege @zw0610 If I have misunderstood Elastic PyTorchJob, could you correct me?

tenzen-y commented 1 year ago

Also, we may be able to use the elastic Indexed Job feature for MPIJob v1 with Horovod.

tenzen-y commented 1 year ago

The Elastic Indexed Job is supposed to graduate to beta in K8s 1.27. So we can work on this once we stop supporting k8s 1.26 (maybe next year?).

alculquicondor commented 1 year ago

I agree

tenzen-y commented 1 year ago

We may be able to introduce JobSet instead of batch/v1 Job, although I think we need to wait for the JobSet API to reach beta.

https://github.com/kubernetes-sigs/jobset
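For reference, a very rough sketch of how a PyTorchJob-like layout might map onto JobSet, assuming the v1alpha2 Go types under sigs.k8s.io/jobset/api/jobset/v1alpha2 (field names may change before beta, and `pytorchLikeJobSet` is a hypothetical helper):

```go
package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	jobsetv1alpha2 "sigs.k8s.io/jobset/api/jobset/v1alpha2"
)

// pytorchLikeJobSet groups a master Job and a worker Job into one JobSet,
// instead of the training-operator managing them as separate objects.
func pytorchLikeJobSet(name string, master, worker batchv1.JobTemplateSpec) *jobsetv1alpha2.JobSet {
	return &jobsetv1alpha2.JobSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: jobsetv1alpha2.JobSetSpec{
			ReplicatedJobs: []jobsetv1alpha2.ReplicatedJob{
				// The Job templates would themselves be Indexed batch/v1 Jobs,
				// so the per-rank semantics from the batch/v1 migration carry over.
				{Name: "master", Replicas: 1, Template: master},
				{Name: "worker", Replicas: 1, Template: worker},
			},
		},
	}
}
```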

tenzen-y commented 1 year ago

As a first step, migrating to batch/v1 Job might be better; after that, we can migrate to JobSet, since migrating directly to JobSet would have too much impact on the training operator.

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

tenzen-y commented 1 year ago

/lifecycle frozen

tenzen-y commented 4 months ago

/assign

We're starting the discussion based on my enhancement proposal.