Adam-D-Lewis closed this issue 1 year ago.
There are a few possible options I've considered: jupyterhub-ssh, kbatch, or just letting whatever solution we come up with in https://github.com/Quansight/qhub/issues/1100 and/or https://github.com/Quansight/qhub/issues/1099 handle this as well.
Kbatch and jupyterhub-ssh could both potentially solve this. Kbatch has the advantage that it can run on custom Docker images, giving users access to dependencies that aren't available through conda, but the user does not have access to all of the jupyter user's files. Instead, kbatch only allows you to pass in a single file or directory of up to 1 MiB in size (it uses a ConfigMap under the hood).
Jupyterhub-ssh is simpler, works with no additional dependencies, and allows access to all of the jupyter user's files by default, so it seems preferable to me. Both kbatch and jupyterhub-ssh can give users access to the conda envs in conda-store, and both could run notebooks via papermill. Neither option currently allows the user to choose what instance size they run on, and it's not clear to me whether users could still use dask-gateway with kbatch (maybe a permissions issue?) since kbatch runs the job as a separate pod. It's also not yet clear to me whether the ssh sessions would be closed after the job finished when using jupyterhub-ssh, so that may still be something to look into.
ssh -o User=<username> <qhub-url> -p 8022
and enter the token as the password. Then start the job with
nohup <my-command> &
The above isn't too complex, but it might be nice to wrap it in a thin CLI tool similar to kbatch's CLI. I'm not particular about the name, but let's call it "qrunner" for the sake of this example. The user could pip install qrunner, then do something like
qrunner configure --url="<qhub-url>" --token="<JUPYTERHUB_TOKEN>"
qrunner run <my-command> --output my-command-output.log --conda-env my-env
which simply runs the command prefaced by nohup conda run -n my-env and directs the stdout to my-command-output.log.
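To make that concrete, here is a rough sketch of the expansion such a wrapper might perform. qrunner does not exist yet, so the helper name and behavior below are purely illustrative; it only assembles the command string described above rather than defining a real CLI:

```shell
# Hypothetical sketch of what `qrunner run <my-command>` might execute.
# build_cmd assembles the nohup/conda-run invocation described above.
build_cmd() {
  conda_env="$1"; logfile="$2"; shift 2
  echo "nohup conda run -n ${conda_env} $* > ${logfile} 2>&1 &"
}

build_cmd my-env my-command-output.log python train.py
# prints: nohup conda run -n my-env python train.py > my-command-output.log 2>&1 &
```

The real tool would presumably run this over the jupyterhub-ssh connection configured earlier, rather than locally.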
For comparison, the kbatch workflow looks like:
kbatch configure --kbatch-url="https://url-to-kbatch-server" --token="<JUPYTERHUB_TOKEN>"
1. submit a job directly
kbatch job submit --name=test \
--image="<conda-store-docker-image-url>" \
--command='["papermill", "notebook.ipynb"]' \
--file=notebook.ipynb
2a. write a yaml job file
# my-job.yaml
name: "my-job"
command:
- sh
- script.sh
image: <conda-store-docker-image-url>
code: "script.sh"
2b. then submit it
kbatch job submit -f my-job.yaml
kbatch job show "<job-id>"
kbatch job logs "<pod-id>"
(have to get the pod id first)

Regardless of what solution we use, I believe the ideal solution would have the following attributes:
Other options to consider:
kedro install
kedro-docker
It seems like Kedro could technically act as a workflow manager, but it's very focused on the data science use case, and using it as a general-purpose workflow engine would likely require us to shoehorn our needs into their existing structure, leading to a bad user experience. I'd see Kedro as useful during data science projects, but not as a general workflow manager.
Jupyterflow:
- CLI tool that will launch an environment similar to the jupyter user pod via an Argo workflow
- Can specify simple dependencies (a bit clunky, but works)
- Seems stagnant for a year (last commit Mar 1, 2021)
- Can schedule workflows via cron syntax in the workflow file
- Has options to override cpu, memory, nodeSelector, etc.
- Uses the same image as the jupyter user by default, so we'd need to override either:
# workflow.yaml
jobs:
- conda run -n myenv papermill input.ipynb # 1
- conda run -n myenv python train.py softmax 0.5 # 2
- conda run -n myenv python train.py softmax 0.9 # 3
- conda run -n myenv python train.py relu 0.5 # 4
- conda run -n myenv python train.py relu 0.9 # 5
- conda run -n myenv python output.py # 6
# Job index starts at 1.
dags:
- 1 >> 2
- 1 >> 3
- 1 >> 4
- 1 >> 5
- 2 >> 6
- 3 >> 6
- 4 >> 6
- 5 >> 6
then jupyterflow run -f workflow.yaml
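To spell out the DAG semantics above: jobs 2-5 all depend on job 1, and job 6 depends on all of 2-5, i.e. a fan-out/fan-in. The same shape can be sketched with plain shell job control; the `run` function below is just a stand-in for the real `conda run -n myenv ...` commands:

```shell
# Fan-out/fan-in shape of the workflow above, using shell job control.
# `run` is a placeholder for the real conda run commands.
run() { echo "job $1"; }

run 1                                # papermill input.ipynb
run 2 & run 3 & run 4 & run 5 &     # the train.py variants run in parallel
wait                                 # fan-in: block until jobs 2-5 finish
run 6                                # output.py
```

Jupyterflow/Argo would run each job as its own pod rather than a background process, but the dependency structure is the same.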
I like jupyterflow for its simplicity. It makes some reasonable assumptions (which image to use, which volumes to mount) that make it easy for users unfamiliar with Kubernetes to define and run workflows. We could likely add functionality to either launch in the conda-store image by default or prepend conda run -n without the user needing to specify it. We could also add the ability to transfer env vars over to the workflow by default. It also supports scheduling of workflows (cron). However, more complex workflows may require a different tool. I'm also not familiar with the reporting capabilities of Argo Workflows, which is the only reporting/observability solution (by default) for this.
Perhaps creating some way to make similar assumptions, but using a more fully featured tool, could also be an option if preferred over jupyterflow.
I played around with Argo Workflows today and got a few sample workflows to run using the argo CLI. This was fairly trivial once you have a Kubernetes cluster up and running (I was doing so on QHub deployed on Minikube).
Working with Argo Workflows requires an argo-server running on the cluster (installed via a kubectl apply command), and then to interact with it, you'll need the aforementioned argo CLI. Argo does seem to have an argo-helm repo which might be useful if/when we want to integrate it into QHub.
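For reference, a minimal Argo workflow along the lines of the samples I ran (this is the hello-world example from the Argo docs) looks like:

```yaml
# hello-world.yaml -- the canonical example from the Argo Workflows docs
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello world"]
```

It's submitted with argo submit hello-world.yaml, and argo list / argo logs show status and output.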
From skimming the docs for many of the tools listed above, it seems like many of them either require or play nicely with Argo Workflows.
The gap that exists with Argo Workflow is how to enable users to launch these workflows from JupyterLab. Yason or Jupyterflow might be possible solutions. My main concern around these two tools is that they both seem to be maintained by individuals.
In the same vein as Hera, Argo Workflows seems to have an existing Python SDK.
I'm curious to learn more about the visualizations/reporting in Argo Workflows. I'm also not clear on how authentication/authorization would work. Maybe we don't need to worry about authentication/authorization just yet though.
So it sounds like Argo is a strong contender for the base layer of our integrated workflow solution and then on top of it we could potentially have multiple client side tools leveraging it.
@trallard
Argo is a really versatile orchestration engine - not only does it integrate well with other pipeline/ML tools, but it opens up loads of possibilities for CI-driven ML workflows. I think it is a good bet in terms of flexibility and extensibility for QHub and its users.
@dharhas @Adam-D-Lewis are we planning to explore more options?
Well, I don't think we need to explore more options per se, but the current integrations are not fully complete, i.e.
The above should probably be opened as new issues, and then this one can be closed.
Argo-Workflows has been integrated. This can be closed 🎉
Feature description
Currently long running computations require a browser window to be kept open for the duration of the computation. Some prototype work has been done to enable ‘background’ and ‘batch’ processes to be run on QHub.
Value and/or benefit
This feature would build on that prototype work and make it easily accessible to scientists and engineers using the platform.
Anything else?
No response