Closed — paulfd closed this 2 years ago
Sorry about the linter; that one is simple to fix. I don't really understand why the docs and coverage break, though; maybe some supporting package got upgraded along with the newer pytorch-lightning version?
Thanks for the PR. The coverage failure is expected, because you added some code without tests. For the docs, it's probably a dependency problem, as you mention. Do you mind having a look, please?
Thanks @paulfd for the PR!
We'll be able to upgrade to lightning 1.5 and above. But first we'll need to update the recipes for the new Trainer API (the `strategy`, `accelerator`, and `devices` arguments are different from PL<1.3.0).
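To illustrate the Trainer argument change mentioned above, here is a hedged sketch of how the old-style arguments map onto the newer `accelerator`/`devices`/`strategy` split. The helper name and the exact mapping are assumptions for illustration, covering only the common GPU/DDP cases, not code from this PR:

```python
def translate_trainer_args(gpus=None, accelerator=None):
    """Hypothetical helper: map pre-1.5 Trainer arguments (e.g. gpus=2,
    accelerator="ddp") to the newer accelerator/devices/strategy split.
    Only the common GPU/DDP cases are covered."""
    new = {}
    if accelerator in ("ddp", "ddp_spawn", "dp"):
        # Distributed modes moved from `accelerator` to `strategy`.
        new["strategy"] = accelerator
        new["accelerator"] = "gpu" if gpus else "cpu"
    else:
        new["accelerator"] = accelerator or ("gpu" if gpus else "cpu")
    if gpus:
        # `gpus=N` becomes `devices=N` with `accelerator="gpu"`.
        new["devices"] = gpus
    return new


# Old style:  Trainer(gpus=2, accelerator="ddp")
# New style:  Trainer(**translate_trainer_args(gpus=2, accelerator="ddp"))
```

A recipe update along these lines would replace each old-style `Trainer(...)` call with the new keyword names rather than relying on a translation helper.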
I changed the base branch of the PR because, as soon as we merge this, the recipes won't work anymore (due to the changes in the Trainer arguments), so I prefer to make a dev branch out of this.
Thanks again, merging this!
- Removed the `outputs` parameter from `on_train_epoch_end`.
- Added `lr_scheduler_step` to the base model.
- This also allows torchmetrics to follow PL, which caused problems when PL was pinned but torchmetrics was not.
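For context on the hook changes mentioned above, here is a hedged sketch of what the updated base-model hooks could look like. The class is a plain stand-in (not a real `LightningModule` subclass from this PR), and the exact `lr_scheduler_step` signature is an assumption:

```python
class BaseModel:
    """Stand-in for the project's base model, sketching the hook changes."""

    # Before: def on_train_epoch_end(self, outputs): ...
    # After: the `outputs` argument is gone; anything needed here must be
    # cached during training_step instead.
    def on_train_epoch_end(self):
        pass

    # Assumed sketch: `lr_scheduler_step` lets the module decide how its
    # scheduler is stepped, e.g. passing a monitored metric to
    # ReduceLROnPlateau-style schedulers.
    def lr_scheduler_step(self, scheduler, metric=None):
        if metric is None:
            scheduler.step()
        else:
            scheduler.step(metric)
```

The point is that epoch-end aggregation now happens via cached state (or torchmetrics objects) rather than a passed-in `outputs` list, which is what lets torchmetrics track the PL version independently.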