Closed: efiop closed this 1 year ago
If you have any quick tips here, I would appreciate them. I typically use notebooks for development and inline visualization, and I'm trying to migrate a project to DVC right now, my first DVC project 🎉. I'm thinking it might be best to develop and debug in the notebook as usual, then, when I'm ready to run the notebook end-to-end, use e.g.:

```shell
dvc run -d train.ipynb -o training.html -o checkpoint.pt jupyter nbconvert --to html --execute train.ipynb
```
Hi @colllin!
I see by the name of the notebook `train.ipynb` that you are splitting your pipeline into separate steps, which you then plan to run using DVC. That is precisely what we usually recommend! You should be all set :tada: Please don't hesitate to share your experience, we would really appreciate it. :slightly_smiling_face:
Any progress on such an example?
@mlisovyi I’ve been using Jupyter in a pipeline with a command like:
```shell
jupyter nbconvert Train.ipynb --clear-output --inplace --execute --ExecutePreprocessor.timeout=-1
```
This executes the notebook and overwrites it in place, as if I had opened it in Jupyter, run the entire notebook manually, and saved it. I then commit the resulting notebook to Git. I also specify some outputs, which are cached: a directory for model checkpoints and a directory for logs. For dependencies, I specify the notebook itself as well as a directory of supporting modules.
I believe the initial command to set it up looked something like:

```shell
dvc run -d Train.ipynb -d src/ -o checkpoints/ -o logs/ jupyter nbconvert Train.ipynb --clear-output --inplace --execute --ExecutePreprocessor.timeout=-1
```
You might also need to specify a name for the pipeline step somewhere in that command. I used `train.dvc`, which I can then execute using `dvc repro train.dvc`.
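For reference, in DVC 2.0+ `dvc run` was replaced by `dvc stage add`, and stage definitions live in `dvc.yaml`. A roughly equivalent stage might look like the following sketch, assuming the same file layout as above:

```yaml
# dvc.yaml (hypothetical equivalent of the dvc run command above)
stages:
  train:
    cmd: jupyter nbconvert Train.ipynb --clear-output --inplace --execute --ExecutePreprocessor.timeout=-1
    deps:
      - Train.ipynb
      - src/
    outs:
      - checkpoints/
      - logs/
```

`dvc repro train` would then execute the stage.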
Do we envision this as a regular part of our DVC user guide, or as a blog post? WDYT @flippedcoder? Thanks
Cc @jendefig
Also cc @dberenbaum — I think we discussed this topic at some point. Do you still have your DVC/Jupyter Notebook examples handy? Thanks
There are a couple different ways to use notebooks with DVC.
The comments above are about running a notebook end-to-end as a DVC stage. I think the examples above give some good ideas about how best to do that.
Another way to integrate DVC and notebooks is to use DVC within the notebook. This could mean either running DVC commands (like running experiments/stages from within the notebook) or doing some analysis or otherwise using artifacts or info from an existing DVC project. We plan to work on an experiments API in the future, which will probably be a good point at which to add some notebook examples like this.
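As a small sketch of that second pattern with today's API, DVC-tracked artifacts can already be read from inside a notebook via the `dvc.api` Python module (the repo URL and file paths below are placeholders):

```python
import dvc.api

# Load a DVC-tracked file into memory from a (placeholder) project and revision.
data = dvc.api.read(
    "data/features.csv",                        # path inside the repo (placeholder)
    repo="https://github.com/example/project",  # placeholder repo URL
    rev="main",                                 # any Git revision
)

# Or stream a larger binary artifact instead of loading it all at once.
with dvc.api.open(
    "models/model.pt",                          # placeholder path
    repo="https://github.com/example/project",
    mode="rb",
) as f:
    header = f.read(16)
```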
I would think a best-practices guide for migration would be good for both the docs and a blog post. Maybe show different ways to do it in a series of blog posts?
For me, the ultimate DVC-Jupyter integration (requiring quite some work) would be to provide users with something like custom IPython magic commands to generate the DVC stuff, similar to some of the functions that nbdev provides.
This would be in line with the workflow: 1. hacky prototype in a notebook -> 2. move to Python scripts -> 3. add DVC for reproducibility. These hypothetical DVC magic commands would help to go from 1 to 3 more easily.
For example (roughly speaking and with no details), given a Jupyter cell:

```python
EPOCHS = 10
for epoch in range(EPOCHS):
    print(epoch)
```
The user would add the magic commands:

```python
%%dvc stage train
%dvc param
EPOCHS = 10
for epoch in range(EPOCHS):
    print(epoch)
```
And the commands would generate something like a Python script, plus updates to the DVC params/stage:
```python
# train.py
import sys

import yaml

if __name__ == "__main__":
    # Load parameters from the params file passed on the command line.
    with open(sys.argv[1]) as f:
        params = yaml.safe_load(f)
    EPOCHS = params["train"]["EPOCHS"]
    for epoch in range(EPOCHS):
        print(epoch)
```
```yaml
# dvc.yaml
stages:
  train:
    cmd: python train.py params.yaml
    params:
      - train
```
```yaml
# params.yaml
train:
  EPOCHS: 10
```
So the user went from a Jupyter cell to being able to run `dvc exp run -S train.EPOCHS=20`.
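None of these magics exist in DVC today, but as a purely illustrative sketch, a custom IPython cell magic along these lines could capture a cell's source and write it out as a script (parsing the `%dvc param` markers and updating `dvc.yaml`/`params.yaml` is omitted; this only works inside a running IPython/Jupyter session):

```python
from IPython.core.magic import register_cell_magic

@register_cell_magic
def dvc_stage(line, cell):
    """Toy %%dvc_stage magic: write the cell body to <name>.py.

    A real implementation would also extract parameters and update
    dvc.yaml / params.yaml; this sketch only dumps the code.
    """
    name = line.strip() or "stage"
    with open(f"{name}.py", "w") as f:
        f.write(cell)
    print(f"wrote {name}.py")
```

Running a cell that starts with `%%dvc_stage train` would then drop a `train.py` next to the notebook.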
That's a very nice idea @daavoo
I also believe that if there could be some kind of dependency resolution among the Jupyter cells, we could define and run the whole pipeline in a notebook.
```
%%stage params
EPOCHS=10
```
and another stage
```
%%stage train
model.train(epochs=EPOCHS)
```
Defining a pipeline like:
```
%%pipeline my-exp
%%depend train param
```
and running the experiment like:
```
%%exp my-exp
```
one should be able to mimic most of the pipeline features. Later, it's possible to create DVC files from these definitions by generating code files, `params.yaml`, etc.
Can we move this to https://github.com/iterative/dvc/discussions? We can keep this ticket to document patterns like https://github.com/iterative/dvc.org/issues/96#issuecomment-471198943, but the discussion is now moving towards new feature ideas. Also related to the above suggestions: https://github.com/iterative/dvc/discussions/6011.
@daavoo @iesahin I agree with @dberenbaum the feature suggestions are great but should be in the core repo please 🙂
Is there a recommendation/decision on writing docs or a blog post based on current features? Thanks
I'd strongly recommend taking a look at https://github.com/nteract/papermill, which integrates quite nicely with DVC :)
Essentially, substitute `python script.py` with `papermill notebook.ipynb`. There are also lots of ways to play around with params, deps & outputs.
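For instance, a hypothetical `dvc.yaml` stage wiring papermill into a pipeline might look like the sketch below (the notebook, output path, and `epochs` parameter are made up for illustration; `-p` is papermill's flag for injecting parameters):

```yaml
stages:
  train:
    cmd: papermill train.ipynb results/train-out.ipynb -p epochs 10
    deps:
      - train.ipynb
    outs:
      - results/train-out.ipynb  # executed notebook, kept as a pipeline artifact
```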
Ping @jendefig 🙂 (I think you were looking for ideas; well, this is the oldest open ticket in this repo)
@jorge Thanks! @flippedcoder is finishing up a post on this now here. Not sure she is familiar with Papermill, @casperdcl. With what you know, could you take a look and see if there are any significant advantages over the approach being used now?
Does https://iterative.ai/blog/jupyter-notebook-dvc-pipeline close this? WDYT @dberenbaum @jendefig
Cc @RCdeWit is there an issue for the planned blog follow-up? (Getting to an actual pipeline)
Thanks
@jorgeorpinel I would think that this isn't quite closed until the follow-up to the papermill post is done. But plans for the next one are already in our backlog, so it won't be lost.
I think we can close this for now. No need to track it here as a separate issue.