Closed: Dee61298 closed this 1 year ago
I think getting @anhtu293 & @Data-Iab's opinions on this PR would be great so we can move on.
Very good idea! @Dee61298, in the base lightning module you shouldn't redefine the flags that are already defined by Lightning (https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#trainer-flags)
@anhtu293 do you mean these lines? I copied them from another codebase, but I'm guessing that was a mistake:
import argparse

parser = argparse.ArgumentParser()
# These two flags duplicate options that Lightning's Trainer already defines.
parser.add_argument(
    "--gradient_clip_val", type=float, default=0.1, help="Gradient clipping norm (Default: %(default)s)"
)
parser.add_argument(
    "--accumulate_grad_batches",
    type=int,
    default=4,
    help="Number of gradient accumulation steps (Default: %(default)s)",
)
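For reference, rather than redefining these flags by hand, Lightning can register all of its Trainer flags on an existing parser. A minimal sketch, assuming Lightning 1.x (where Trainer.add_argparse_args and Trainer.from_argparse_args are available):

import argparse
import pytorch_lightning as pl

parser = argparse.ArgumentParser()
# Registers every Trainer flag (including --gradient_clip_val and
# --accumulate_grad_batches) with Lightning's own defaults, so the
# base module does not need to redefine them.
parser = pl.Trainer.add_argparse_args(parser)
args = parser.parse_args()

# The parsed namespace can then be forwarded straight to the Trainer.
trainer = pl.Trainer.from_argparse_args(args)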
Closing this because I made PR #341 without the formatting issues.
Introducing base classes for datamodules and train pipelines (inspired by the BaseDataset class). @thibo73800
New feature 1: BaseDataModule class
My motivation for this class is that I kept reusing code from other projects, such as the argument definitions, the aug/no-aug train_transform structure, etc. This created a lot of copy-pasting, which is undesirable. My goal is that, in the future, when creating a DataModule for a project, we inherit from the BaseDataModule class and implement only the transforms and the setup. It acts as a wrapper around the PyTorch Lightning DataModule class, giving all aloception users a common code base. A sketch of the idea follows below.
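A minimal sketch of what such a base class could look like; the argument names and method signatures here are illustrative assumptions, not the final API of this PR:

import pytorch_lightning as pl
from torch.utils.data import DataLoader


class BaseDataModule(pl.LightningDataModule):
    """Shared DataModule boilerplate: common arguments and the
    aug/no-aug train_transform structure live here (sketch)."""

    def __init__(self, batch_size: int = 8, num_workers: int = 4, no_aug: bool = False):
        super().__init__()
        self.batch_size = batch_size
        self.num_workers = num_workers
        self.no_aug = no_aug

    @staticmethod
    def add_argparse_args(parser):
        # Common flags every project DataModule would otherwise copy-paste.
        parser.add_argument("--batch_size", type=int, default=8)
        parser.add_argument("--num_workers", type=int, default=4)
        parser.add_argument("--no_aug", action="store_true")
        return parser

    def train_transform(self, frame):
        # Child classes implement the (augmented) train transform.
        raise NotImplementedError

    def setup(self, stage=None):
        # Child classes build self.train_dataset / self.val_dataset here.
        raise NotImplementedError

    def train_dataloader(self):
        return DataLoader(
            self.train_dataset,
            batch_size=self.batch_size,
            num_workers=self.num_workers,
            shuffle=True,
        )

A project-specific DataModule would then inherit from BaseDataModule and only fill in train_transform and setup.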
New feature 2: BaseLightningModule
Same motivation, but for training pipelines. This time the often-reused parts are, again, the arguments, as well as the optimizers, the run functions, etc. When inheriting, the user only needs to implement the model and the criterion, and is of course free to add their own methods in the child class for more complex cases. A sketch follows below.
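Again, a minimal sketch of the intended shape, assuming hypothetical hook names (build_model, build_criterion) rather than the PR's actual ones:

import pytorch_lightning as pl
import torch


class BaseLightningModule(pl.LightningModule):
    """Shared training-pipeline boilerplate: optimizer setup and the
    common training step live here (sketch)."""

    def __init__(self, lr: float = 1e-4):
        super().__init__()
        self.lr = lr
        self.model = self.build_model()          # implemented by the child class
        self.criterion = self.build_criterion()  # implemented by the child class

    def build_model(self):
        raise NotImplementedError

    def build_criterion(self):
        raise NotImplementedError

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.criterion(self(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)

A child class overrides build_model and build_criterion, and can still override training_step or configure_optimizers for more complex cases.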
This pull request includes