This is the Lightning Library - a collection of Lightning-related notebooks that are pulled back into the main repo as a submodule and rendered inside the main documentation. For more details on the key features and the recommended workflow, read our blogpost - Best Practices for Publishing PyTorch Lightning Tutorial Notebooks.
The main branch of this repo contains only Python scripts with markdown extensions; the notebooks themselves are generated in a special publication branch, so raw notebooks are not accepted in PRs. We do, however, recommend developing your example as a notebook and then converting it to a script with jupytext:
```bash
jupytext --set-formats ipynb,py:percent my-notebook.ipynb
```
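For illustration, a script paired in the `py:percent` format looks roughly like the minimal sketch below; the cell contents are made up and only show how markdown and code cells are marked:

```python
# %% [markdown]
# # Sample notebook
# Markdown cells start with the `[markdown]` marker and are rendered as text.

# %%
# Regular code cells start with a plain `# %%` marker.
import torch

x = torch.rand(3, 3)
print(x.shape)
```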
The addition has to be formed as a new folder (an illustrative layout is sketched after this list) containing:

- a single `.py` file with the converted notebook (the file name does not matter)
- a `.meta.yaml` file including the following info:
  ```yaml
  title: Sample notebooks
  author: [User](contact)
  created: YYYY-MM-DD
  updated: YYYY-MM-DD
  license: CC BY-SA
  # multi-line
  description: |
    This notebook will walk you through ...
  # define supported - CPU|GPU|TPU
  accelerator:
    - CPU
  ```
- an optional `requirements.txt` in the particular folder (in case you need some packages other than those listed in the parent folder)
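Putting this together, a contribution folder might look like the following sketch (all names here are illustrative, not prescribed by the repo):

```
my-sample-topic/
├── sample-notebook.py   # the converted notebook in py:percent format
├── .meta.yaml           # the metadata described above
└── requirements.txt     # optional extra requirements
```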
It is quite common to use some public or competition dataset for your example. We facilitate this by defining the data sources in the metafile. There are two basic options: download a file from the web, or pull a Kaggle dataset [Experimental]:
```yaml
datasets:
  web:
    - https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
  kaggle:
    - titanic  # this needs to be a public dataset
```
In both cases, the downloaded archive (a Kaggle dataset is originally downloaded as a zip file) is extracted into the default dataset folder, under a sub-folder with the same name as the downloaded file. To get the path to this dataset folder, use the environment variable `PATH_DATASETS`, so in your script use:
```python
import os

data_path = os.environ.get("PATH_DATASETS", "_datasets")
path_titanic = os.path.join(data_path, "titanic")
```
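As a usage sketch, you could then read a file from the extracted dataset folder. The file name below assumes the public Kaggle Titanic dataset ships a `train.csv`; that is an assumption for illustration, not something this repo guarantees:

```python
import os

import pandas as pd

data_path = os.environ.get("PATH_DATASETS", "_datasets")
# assumption: the extracted "titanic" dataset contains a train.csv file
df_train = pd.read_csv(os.path.join(data_path, "titanic", "train.csv"))
print(df_train.head())
```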
Warning: some Kaggle datasets can be quite large, and the process involves downloading and then extracting, which means the particular runner needs double the free space. For this reason, the CPU runner is limited to 3GB datasets.
Images can be included with standard Markdown image syntax, optionally with sizing attributes: `![Caption](my-image.png){height="60px" width="240px"}`
Behind the scenes, the publishing workflow in principle consists of these three steps:
```bash
# 1) convert the script to a notebook
jupytext --set-formats ipynb,py:percent notebook.py
# 2) [OPTIONAL] test the created notebook
pytest -v notebook.ipynb --nbval
# 3) generate the notebook outputs
papermill in-notebook.ipynb out-notebook.ipynb
```
You may want to build the documentation locally without the need to execute all notebooks. In such a case you can convert all scripts to IPython notebooks as a dry run:
```bash
# skip the notebook execution, just do the conversion
export DRY_RUN=1
# generate notebooks from the scripts
make ipynb
# build the documentation
make docs
```