Open Jeffwan opened 4 years ago
What do you think about having generic launcher components that receive a resolved, serialized TaskSpec (or container image + command line) and launch the given component?
What do you think about syntax like this?

```python
MyLauncher = load_component(...)

with dsl.use_launcher(MyLauncher(num_workers=10)):
    launched_task = XGBoostTrainer(training_data=..., num_trees=500)
```

or

```python
MyLauncher = load_component(...)

launcher_for_train = MyLauncher(
    num_workers=10,
    task=XGBoostTrainer(training_data=..., num_trees=500),
)
```
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity. Please comment "/reopen" to reopen it.
/reopen
@Jeffwan: Reopened this issue.
Hi @Jeffwan and @Ark-kun . I would like to contribute to this issue. Please let me know how I can be of any help. :)
Any update to this feature?
I believe it would be great if Kubeflow Pipelines could provide a generic launcher that creates a CRD and manages its lifespan, e.g. MPIJob, PyTorchJob, etc.
This requirement can be partially satisfied by using a Katib Experiment. However, as far as I know, that approach has some clear drawbacks:
Thus, it is desirable to have a GenericLauncher in Kubeflow Pipelines, along with an operator that manages the lifespan of the launcher pod and the created CRDs.
Hi, I am also looking for this feature, especially for PyTorch; the PR for it seems to have been paused for some time: https://github.com/kubeflow/pipelines/pull/5170
I could run distributed training using a PytorchJob (created by ResourceOp), but this approach has a disadvantage: the pipeline UI does not show the worker container logs, only the logs of the job controller.
@ca-scribner please help continue the PR, thanks a lot.
@jalola Thanks for the info. Do you mind to share an example on how to define a PytorchJob with the help of ResourceOp? Thanks in advance.
@wangli1426 A simple ResourceOp example: https://github.com/kubeflow/pipelines/blob/master/samples/core/resource_ops/resource_ops.py
For the PytorchJob: https://github.com/kubeflow/pytorch-operator/blob/master/examples/mnist/v1/pytorch_job_mnist_nccl.yaml (you can convert the YAML to JSON).
Remember to set `success_condition`, for example:

```python
success_condition='status.replicaStatuses.Worker.succeeded==3,status.replicaStatuses.Chief.succeeded==1'
```
https://github.com/kubeflow/pipelines/blob/master/samples/contrib/e2e-mnist/mnist-pipeline.ipynb
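To make the ResourceOp approach above concrete, here is a minimal sketch of building a PyTorchJob manifest in Python and handing it to `dsl.ResourceOp` (KFP v1 SDK). The job name, image, and replica counts are placeholders, not values from this thread:

```python
# Sketch only: a minimal PyTorchJob (kubeflow.org/v1) manifest as a dict,
# suitable for passing to dsl.ResourceOp's k8s_resource argument.

def pytorch_job_manifest(name, image, workers):
    """Build a minimal PyTorchJob manifest with one Master and N Workers."""
    def replica_spec(n):
        return {
            "replicas": n,
            "restartPolicy": "OnFailure",
            "template": {"spec": {"containers": [
                {"name": "pytorch", "image": image},
            ]}},
        }
    return {
        "apiVersion": "kubeflow.org/v1",
        "kind": "PyTorchJob",
        "metadata": {"name": name},
        "spec": {"pytorchReplicaSpecs": {
            "Master": replica_spec(1),
            "Worker": replica_spec(workers),
        }},
    }

# Inside a @dsl.pipeline function you would then create the job with
# something like:
#
#   op = dsl.ResourceOp(
#       name="pytorch-train",
#       k8s_resource=pytorch_job_manifest("dist-mnist", "my-image:latest", 3),
#       action="create",
#       success_condition="status.replicaStatuses.Worker.succeeded==3",
#   )
```

The `success_condition` string must match the replica counts in the manifest, otherwise the ResourceOp never completes.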
Hi @jalola. Just wondering, how can we stream all worker logs (when the number of workers > 1) into the pipeline log console? Or were you only looking for the logs of the chief? Do you have any ideas in mind?
I only know that they have a client SDK to get logs. Example: https://github.com/kubeflow/pytorch-operator/blob/4aeb6503162465766476519339d3285f75ffe03e/sdk/python/examples/kubeflow-pytorchjob-sdk.ipynb
But I don't know how to show the logs in a pipeline component.
Sorry, I let this slip my mind and now I don't have a good way to test. The changes requested were minor, though, and the code in the PR still works, if that helps. Maybe you could finish it off.
> But I don't know how to show the logs to a component of pipeline.

You could just print them.
I am using the k8s_client API (Watch and read_namespaced_pod_log) to stream the logs from the training pod; this works. PyTorchJobClient's get_logs(follow=True) does not stream the logs line by line, it only returns the whole log once the training finishes.
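For reference, a minimal sketch of the Watch + read_namespaced_pod_log pattern described above, using the official `kubernetes` Python client. The pod name and namespace are placeholders; the import is deferred so the snippet can be read without the dependency installed:

```python
def stream_pod_logs(pod_name, namespace="kubeflow"):
    """Print a pod's log line by line as it is produced.

    Because the lines go to stdout of the component that calls this,
    they show up in the pipeline UI's log console for that step.
    Requires the `kubernetes` package and valid cluster credentials.
    """
    from kubernetes import client, config, watch

    config.load_incluster_config()  # use load_kube_config() outside the cluster
    core_v1 = client.CoreV1Api()
    # Watch.stream() wraps read_namespaced_pod_log and yields one decoded
    # log line per iteration, instead of the whole log at the end.
    for line in watch.Watch().stream(core_v1.read_namespaced_pod_log,
                                     name=pod_name, namespace=namespace):
        print(line)
```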
@Ark-kun Another issue I've found when using launch_crd: if a user "terminates" the pipeline run, only the launcher pod (which runs launch_crd) is deleted; the distributed training pods keep running. What do you think? If you can give some advice, I may implement it in https://github.com/kubeflow/pipelines/pull/5170
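One possible direction for the cleanup problem above (an assumption on my part, not something the PR already does): give the created CRD an `ownerReference` pointing at the launcher pod, so the Kubernetes garbage collector deletes the training job when the launcher pod is deleted on run termination. A sketch, with illustrative names:

```python
def with_owner_reference(crd_manifest, owner_pod):
    """Attach an ownerReference for `owner_pod` (a pod manifest dict) to the CRD.

    When the owner pod is deleted, the Kubernetes garbage collector deletes
    this dependent object as well. Owner and dependent must be in the same
    namespace.
    """
    ref = {
        "apiVersion": "v1",
        "kind": "Pod",
        "name": owner_pod["metadata"]["name"],
        "uid": owner_pod["metadata"]["uid"],
        "controller": True,
        # During foreground cascading deletion, keep the owner around until
        # this dependent has been removed.
        "blockOwnerDeletion": True,
    }
    metadata = crd_manifest.setdefault("metadata", {})
    metadata.setdefault("ownerReferences", []).append(ref)
    return crd_manifest
```

The launcher would read its own pod name/UID (e.g. via the downward API) and apply this before submitting the CRD.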
Hi everyone, I'm quite interested in this as well. Is there any progress towards built-in support for distributed training jobs in pipelines?
Is this still in the roadmap?
In order to leverage the different training operators in Kubeflow Pipelines, it would be better to provide high-level launcher components as an abstraction for invoking training jobs.
katib-launcher and launcher (https://github.com/kubeflow/pipelines/tree/master/components/kubeflow) are launcher components for Katib and tf-operator. We definitely need more similar components for PyTorch, MXNet, MPI, XGBoost, etc.