To keep the repository lighter, and therefore faster to clone and pull on each branch, it would be convenient to move all the data (pictures, tables, etc.) out of the notebook repositories. Instead, we could have a common data repository shared by several courses (DL101, DL201, the Python classes, etc.).
Can the course designers confirm that this data is updated much less often than the code?
The only difficulty would be referencing that data from the notebooks: it might be necessary to use the full path to the Unpack AI GitHub data repository to access it.
Another advantage: we would avoid the "!git clone" operation at the beginning of the notebooks, which can make them quite slow to start, even though we only use a small part of that data.
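As a minimal sketch of what "using the full path" could look like: a notebook could build the raw-content URL for a single file in the shared data repository and fetch only that file, with no clone at all. The repository name, branch, and file path below are assumptions for illustration, not the actual Unpack AI layout.

```python
# Sketch: reference one file in a shared (hypothetical) "unpackai/course-data"
# GitHub repository by its raw-content URL, instead of cloning the whole repo.
# Repo name, branch, and file paths are placeholders, not the real layout.

RAW_BASE = "https://raw.githubusercontent.com/unpackai/course-data/main/"

def raw_url(relative_path: str) -> str:
    """Build the raw-content URL for a single file in the shared data repo."""
    return RAW_BASE + relative_path

# A notebook would then load just the file it needs, e.g. with pandas:
# import pandas as pd
# df = pd.read_csv(raw_url("DL101/lesson1/example_table.csv"))
```

This way each notebook downloads only the files it actually uses, which addresses the slow-start problem directly.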