Confirm that the current pipeline runs the script versions without errors
Freeze and/or do not execute very expensive examples (LFR, wilds)
Should we use small, representative datasets that are part of the repo for those?
The motivating issue is the small wilds example, which has to pull and load a big CSV. If nothing else, it makes the build take a long time; if we keep going, we might run up against ReadTheDocs restrictions on file size or build time. Pulling data from outside the repo also seems a little fragile.
`wilds_datasets.ipynb` is currently excluded via `exclude_patterns` in `conf.py` to avoid this problem. That should be updated once this is fixed.
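For reference, the exclusion looks roughly like the following in `conf.py` (a minimal sketch; the exact path of the notebook and the other entries in the list may differ in the actual config):

```python
# conf.py (Sphinx configuration) -- sketch only
exclude_patterns = [
    "_build",                 # standard Sphinx build-output exclusion
    "wilds_datasets.ipynb",   # skip: pulls a large CSV from outside the repo
]
```

Removing the `"wilds_datasets.ipynb"` entry re-enables the notebook in the docs build once the dataset issue is resolved.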