We run into memory problems with our largest models when doing this. The current workaround is that test.py now supports loading an alternative dataset specified by another run's settings (one with fewer augmentations). It's then possible to load that dataset and make predictions with it.
Re-open this issue if you can come up with a more elegant solution.
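A minimal sketch of the workaround described above (the function names `load_settings` and `build_dataset`, the settings structure, and the optional second settings path are all illustrative, not the actual test.py API):

```python
def load_settings(path):
    # Illustrative stand-in for the project's run-settings loader.
    # A "settings" object here is just a dict; in the real project it
    # would be parsed from the run's settings file.
    return {"dataset": path, "augmentations": []}


def build_dataset(settings):
    # Illustrative: a dataset built from settings with fewer
    # augmentations uses less memory at prediction time.
    return {
        "source": settings["dataset"],
        "augmentations": settings["augmentations"],
    }


def predict(model_settings_path, dataset_settings_path=None):
    model_settings = load_settings(model_settings_path)
    # Workaround: if an alternative run's settings are given, take the
    # dataset definition from there instead of the model's own run
    # settings, so the large model can still make predictions.
    dataset_settings = (
        load_settings(dataset_settings_path)
        if dataset_settings_path is not None
        else model_settings
    )
    return build_dataset(dataset_settings)
```

Usage: `predict("big_run.yaml", "light_run.yaml")` would build the dataset from the lighter run's settings while keeping the large model's own settings for everything else.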