Closed: faroit closed this issue 4 years ago.
Lightning would significantly clean up the training code, but with Lightning's learning-rate scheduling and distributed training, users would no longer be able to reproduce the current pre-trained models.
Since the results are reported in publications, we want to reduce confusion and therefore do not currently plan to retrain the models. Hence, I propose to skip this feature until there is a major change to the model core.
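To illustrate the reproducibility concern: a hand-written training loop controls exactly when the scheduler steps, whereas Lightning drives scheduler stepping itself, so the learning-rate trajectory (and hence the trained weights) can diverge from the published runs. A minimal sketch in plain PyTorch, assuming a `ReduceLROnPlateau`-style setup (the model, hyperparameters, and scheduler choice here are illustrative assumptions, not the actual open-unmix configuration):

```python
import torch

# Toy stand-in for the real model (assumption for illustration).
model = torch.nn.Linear(4, 4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# In a hand-written loop, the scheduler steps on the validation loss
# at a point the author chooses. Lightning calls schedulers according
# to its own configuration, so the step timing can differ.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.5, patience=2
)

for epoch in range(10):
    val_loss = 1.0  # placeholder: a plateauing validation loss
    scheduler.step(val_loss)

# After repeated non-improving epochs the learning rate has been reduced,
# so any change in when/how often step() is called changes this value.
print(optimizer.param_groups[0]["lr"])
```

The point is not that Lightning is wrong, only that its scheduler and distributed-training plumbing would produce a different learning-rate trajectory than the original loop, breaking bit-for-bit reproduction of the published checkpoints.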
Lightning has matured enough to be used to refactor open-unmix.