ray-project / kuberay

A toolkit to run Ray applications on Kubernetes
Apache License 2.0

[RayJob]: Best way to upload a local working directory to a Ray cluster using RayJob #2200

Open aybidi opened 1 week ago

aybidi commented 1 week ago

In our ML platform, we want to enable users to run their jobs using our CLI. On the backend, we want to leverage the RayJob manifest to create the Ray cluster for them and then run their job on it.

What's the best way to upload a local working directory (in our case, our user's local working environment) to the Ray cluster created by the RayJob manifest? We already use Ray's JobSubmissionClient and using it, we can do so by setting the runtime_env's working_dir. How do we achieve something similar with the RayJob manifest?
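For reference, our current flow looks roughly like this (a sketch with hypothetical helper names; the address and entrypoint are placeholders). `JobSubmissionClient` zips the local `working_dir` and uploads it to the head node over HTTP:

```python
# Sketch of our current JobSubmissionClient flow (hypothetical helper names;
# address and entrypoint are placeholders).

def build_runtime_env(working_dir: str = ".") -> dict:
    """runtime_env asking Ray to upload the local working_dir to the cluster."""
    return {"working_dir": working_dir}


def submit_with_working_dir(address: str, entrypoint: str, working_dir: str = ".") -> str:
    # Imported lazily so the sketch is importable without Ray installed;
    # actually running it requires `pip install "ray[default]"` and a live cluster.
    from ray.job_submission import JobSubmissionClient

    client = JobSubmissionClient(address)  # e.g. "http://<head-node-ip>:8265"
    return client.submit_job(
        entrypoint=entrypoint,             # e.g. "python src/main.py"
        runtime_env=build_runtime_env(working_dir),
    )
```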

andrewsykim commented 1 week ago

RayJob works best for running ephemeral clusters that are cleaned up after running a single job.

However, if you want to run additional jobs on the same RayCluster used for a RayJob, you can just use Ray CLI to submit new jobs:

ray job submit --address <address of the RayCluster from RayJob> --working-dir . -- python myjob.py
andrewsykim commented 1 week ago

In our ML platform, we want to enable users to run their jobs using our CLI. On the backend, we want to leverage the RayJob manifest to create the Ray cluster for them and then run their job on it.

@aybidi if your users are looking to submit ad hoc jobs to a cluster, I think you're better off creating RayClusters for them instead of using RayJob.

aybidi commented 1 week ago

Our use case is also ephemeral. We want the users to run a command like $ <our-cli> submit job and, in the backend, (i) create a cluster, (ii) run the job, and (iii) delete the cluster.

Right now, we achieve it using RayCluster manifest (to create the cluster) and JobSubmissionClient to submit the job to it. We want to, however, replace the JobSubmissionClient and switch to using the RayJob manifest.

aybidi commented 1 week ago

Also, please let me know if you need more context -- happy to discuss it further.

andrewsykim commented 1 week ago

Right now, we achieve it using RayCluster manifest (to create the cluster) and JobSubmissionClient to submit the job to it. We want to, however, replace the JobSubmissionClient and switch to using the RayJob manifest.

Ah, gotcha -- sorry for misunderstanding.

It sounds like you just need to convert the $ <our-cli> submit job call into a RayJob. The rayClusterSpec field would be the same as the existing RayCluster you're using. You'll need to set the RayJob entrypoint field to whatever the user specified in $ <our-cli> submit job. Hope that helps answer your question.
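Roughly something like this (a sketch; the name is hypothetical and the entrypoint is copied from the user's CLI invocation, with rayClusterSpec elided):

```yaml
apiVersion: ray.io/v1
kind: RayJob
metadata:
  name: user-job   # hypothetical name generated by your CLI
spec:
  entrypoint: python src/main.py   # from `<our-cli> submit job -- python src/main.py`
  shutdownAfterJobFinishes: true   # tear down the ephemeral cluster when the job ends
  rayClusterSpec:
    # same spec as the RayCluster manifest you create today
    ...
```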

aybidi commented 1 week ago

Thanks for the reply! We have figured out that bit (creating a cluster and setting the entrypoint).

We're struggling to understand how users can upload their local working dir to the Ray cluster that gets created. Using JobSubmissionClient, the working_dir in the runtime_env refers to a local directory and gets uploaded to the head node. However, when we use the RayJob manifest with runtimeEnvYAML like so:

apiVersion: ray.io/v1
kind: RayJob
metadata:
  name: rayjob-sample
spec:
  entrypoint: python /home/ray/samples/sample_code.py
  runtimeEnvYAML: |
    working_dir: ...

it expects the working_dir to be an existing directory in the cluster (head node's main container).

andrewsykim commented 1 week ago

it expects the working_dir to be an existing directory in the cluster (head node's main container).

I don't think this is supported. RayJob assumes the source code is available inside the container. It can reference remote zip files though (example), so maybe you can add some logic in your CLI to upload the local working directory and reference it in the RayJob? @kevin85421 do you know if there's any other workaround?
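A sketch of what that could look like (the zip URL is a placeholder your CLI would produce after uploading the archive somewhere the cluster can reach):

```yaml
apiVersion: ray.io/v1
kind: RayJob
metadata:
  name: rayjob-sample
spec:
  entrypoint: python src/main.py
  runtimeEnvYAML: |
    # Remote URIs are downloaded by Ray at job start; the URL below is a
    # placeholder for a zip your CLI uploaded beforehand.
    working_dir: "https://example.com/path/to/working_dir.zip"
```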

aybidi commented 1 week ago

providing some more context:

So if a user has a bunch of scripts in their local dev environment:

configs/
    |---- custom-config.yaml
src/
    |---- train.py
    |---- process.py
    |---- eval.py
    |---- main.py

and they want to run $ <our-cli> submit job --config configs/custom-config.yaml -- python src/main.py, we want to prepare the RayJob CR for them using the custom-config.yaml and then submit the RayJob CR. The issue here is that the src/ files may not exist on the Ray cluster that will be created. This wasn't an issue using JobSubmissionClient because the client uploads the src/ to the head node using an HTTP request.

aybidi commented 1 week ago

Online examples in your samples suggest creating a ConfigMap object to store the scripts and then mounting it onto the container. We can create multiple key-value pairs (one for each file name and its content) in a single ConfigMap.

andrewsykim commented 1 week ago

Online examples in your samples suggest creating a ConfigMap object to store the scripts and then mounting it onto the container. We can create multiple key-value pairs (one for each file name and its content) in a single ConfigMap.

The ConfigMap was just used as an example, but I don't think we would recommend this for real-world use cases (@kevin85421 correct me if I'm wrong). As soon as you need to reference multiple files or dependencies, it wouldn't scale well.

I think your best bet is to add logic in your internal tool to upload remote zip of the working directory or upload a new container image that includes your local change.
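A rough sketch of that CLI-side logic (the upload step and URL are placeholders; swap in whatever blob store your platform uses):

```python
# Sketch: zip the user's local working dir, (hypothetically) upload it, and
# render the runtimeEnvYAML that the generated RayJob CR would carry.
# The upload itself is elided -- use your platform's blob store client.
import shutil
import tempfile
from pathlib import Path


def package_working_dir(working_dir: str) -> Path:
    """Zip working_dir into a temp location and return the archive path."""
    tmp = Path(tempfile.mkdtemp())
    archive = shutil.make_archive(str(tmp / "working_dir"), "zip", working_dir)
    return Path(archive)


def render_runtime_env_yaml(zip_url: str) -> str:
    """runtimeEnvYAML body pointing at the uploaded archive (URL is a placeholder)."""
    return f"working_dir: {zip_url}\n"


if __name__ == "__main__":
    archive = package_working_dir(".")
    # zip_url = upload(archive)  # hypothetical upload to your blob store
    print(render_runtime_env_yaml("https://example.com/jobs/working_dir.zip"))
```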

HCharlie commented 5 days ago

I see that in the RayJob entrypoint, a user could clone a project and run it. But I'm not sure if there's a GitHub Enterprise integration with RayJob to make it easier to use. @andrewsykim, do you know if it's supported?

kevin85421 commented 23 hours ago

Hi @aybidi, @andrewsykim and I will sync next week to discuss this issue. I will keep you updated.