These changes introduce a bunch of fixups/refactorings to make the latest PyTorch-Lightning changes a bit cleaner. In short:
Moved the load_loss and load_optimizer functions to a new module (optim) so that they can be imported in the model itself;
Replaced the PyTorch dataloader getter with a proper PyTorch-Lightning DataModule getter (the latter is a better fit for PyTorch-Lightning in general, as it unlocks more features in the trainer);
Removed the 'TraineeWrapper' object from the train module and replaced it with a proper LightningModule-based model that defines the required step functions --- it also holds the loss function and the optimizer, as preferred by PyTorch-Lightning design philosophy;
Removed the unnecessary model upload for PyTorch in the model getter (PyTorch-Lightning takes care of all that now).
I tested these changes by generating both PyTorch and Keras projects, testing the pre-commit hook, running pytest, and running the local example config.