Once all the bricks have been tested independently on toy datasets, we need to replicate the experiments of the paper at a smaller scale, as presented in Annex F. This feature requires preparing the loading structure for the toy dataset and the training skeleton into which the different parts will be plugged.
What needs to be done?
[ ] Prepare the dataset loading (CoinRun): use a symlink for the dataset path, which needs to be set as an argument of the main script
[ ] Prepare visualisation of samples at several training steps
[ ] Set up optional multi-GPU dataset loading
[ ] Use data augmentations (if required in the paper)
[ ] Prepare unit tests (to check the shapes of the data)
[ ] Create the train.py file that loads the dataset, the models, and all the configs
[ ] Prepare the training loop skeleton: main(), build_model(), build_dataset(), train(), ...
[ ] Prepare the first config parameters: dataset_path, batch_size, etc.
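As a sketch of the symlinked dataset-path setup from the first item (the `--dataset_path` argument name and the `data/coinrun` symlink location are assumptions, not fixed by this issue):

```python
import argparse
from pathlib import Path


def parse_args(argv=None):
    # --dataset_path defaults to a symlink (e.g. data/coinrun -> /storage/coinrun)
    # so each machine can point the link at its own copy of the dataset.
    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset_path", type=Path, default=Path("data/coinrun"))
    parser.add_argument("--batch_size", type=int, default=32)
    return parser.parse_args(argv)


if __name__ == "__main__":
    args = parse_args()
    # resolve() follows the symlink to the real dataset directory
    print(args.dataset_path.resolve())
```

The symlink keeps machine-specific paths out of the configs; only the link target changes per machine.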
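For the multi-GPU loading item, one minimal approach is to shard dataset indices per process; this pure-Python sketch mimics what `torch.utils.data.DistributedSampler` does (the helper name is ours):

```python
def shard_indices(num_samples, rank, world_size):
    # Each process (one per GPU) gets an interleaved slice of the dataset
    # indices, so the shards are disjoint and together cover the dataset.
    return list(range(rank, num_samples, world_size))


# usage: two processes splitting a 10-sample toy dataset
shard_indices(10, 0, 2)  # -> [0, 2, 4, 6, 8]
shard_indices(10, 1, 2)  # -> [1, 3, 5, 7, 9]
```

In the real code this would likely be replaced by `DistributedSampler` once the loader is wired up.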
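For the shape unit tests, a minimal check could look like this (the `(3, 64, 64)` frame shape is an assumption about the toy CoinRun frames, to be adjusted to the actual data):

```python
import numpy as np


def check_batch_shapes(batch, batch_size, frame_shape=(3, 64, 64)):
    # Fail fast if the loader returns tensors of an unexpected shape.
    assert batch.shape == (batch_size,) + frame_shape, batch.shape
    return True


# usage with a dummy batch standing in for real CoinRun frames
dummy = np.zeros((8, 3, 64, 64), dtype=np.float32)
check_batch_shapes(dummy, batch_size=8)
```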
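The training loop skeleton from the last items could start out like this; the function names come from the checklist, but all bodies are placeholders to be filled in once the dataset and model bricks exist:

```python
def build_dataset(cfg):
    # placeholder: would return a DataLoader over the toy CoinRun data
    # found at cfg["dataset_path"]
    return list(range(cfg["batch_size"]))


def build_model(cfg):
    # placeholder: would instantiate the model bricks tested earlier
    return {"name": "toy_model"}


def train(model, dataset, cfg):
    # skeleton loop: iterate over batches; logging and checkpointing
    # would be added here
    step = 0
    for step, batch in enumerate(dataset):
        pass  # forward / loss / backward / optimizer step go here
    return step + 1  # number of steps run


def main(cfg=None):
    # first config parameters, per the checklist
    cfg = cfg or {"dataset_path": "data/coinrun", "batch_size": 4}
    dataset = build_dataset(cfg)
    model = build_model(cfg)
    return train(model, dataset, cfg)


if __name__ == "__main__":
    main()
```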