Closed lolokoko28 closed 2 years ago
The problem is with TrackTestAccuracyCallback, which calls trainer.test without passing the datamodule. Inside the PL trainer, self._data_connector.attach_data then fails to assign any dataloader to the LightningModule's test_dataloader, because all the loaders as well as the datamodule are None. The attribute is therefore silently not overridden, leading to the technically correct but not very informative MisconfigurationException.
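The failure mode can be reproduced without Lightning itself. The following pure-Python sketch mimics the attach-data logic described above; all names here (Module, attach_data, run_test) are simplified stand-ins for illustration, not Lightning's real internals:

```python
class MisconfigurationException(Exception):
    """Stand-in for PL's MisconfigurationException."""

class Module:
    """Simplified stand-in for a LightningModule holding a test dataloader."""
    def __init__(self):
        self.test_dataloader = None

def attach_data(module, test_dataloaders=None, datamodule=None):
    # Mimics attach_data: if every data source is None,
    # the module's attribute is silently left untouched.
    if test_dataloaders is not None:
        module.test_dataloader = test_dataloaders
    elif datamodule is not None:
        module.test_dataloader = datamodule.test_dataloader

def run_test(module, datamodule=None):
    # Mimics trainer.test: the callback calls this without a datamodule,
    # so attach_data is a no-op and the later check fails.
    attach_data(module, datamodule=datamodule)
    if module.test_dataloader is None:
        raise MisconfigurationException("No test_dataloader() defined.")
```

Calling run_test(Module()) raises the exception, just as trainer.test does when invoked from the callback with no datamodule in sight.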
This bug was probably introduced when PL decoupled trainer.fit and trainer.test. Previously you didn't have to provide the datamodule in test if you had done so in an earlier call to trainer.fit, but now you have to provide it every time. I don't recall which version of PL this was, and I couldn't easily find it.
I see two solutions:
1. Change TrackTestAccuracyCallback such that you must provide the datamodule on initialization. TrackTestAccuracyCallback() must then be replaced with TrackTestAccuracyCallback(datamodule) everywhere.
2. Remove TrackTestAccuracyCallback and let users implement it themselves, as it isn't necessarily within the scope of learn2learn to provide a way to track test accuracy every epoch.

I would personally go with the second one.
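For reference, the first option could look roughly like this. This is a hypothetical sketch, not learn2learn's actual implementation; the hook name and callback body are assumptions:

```python
class TrackTestAccuracyCallback:
    """Hypothetical fix: require the datamodule at construction time so
    trainer.test can always be given an explicit data source."""

    def __init__(self, datamodule):
        self.datamodule = datamodule

    def on_train_epoch_end(self, trainer, module):
        # Pass the stored datamodule explicitly instead of relying on
        # the trainer remembering it from an earlier fit() call.
        trainer.test(module, datamodule=self.datamodule, verbose=False)
```

Users would then construct the callback as TrackTestAccuracyCallback(datamodule) and pass it to the Trainer as usual.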
Good catch! Thank you! Also, after applying the second of your suggestions I get a different error; switching to PyTorch Lightning 1.0.2 helps, but it seems a bit outdated :/
When running the example file https://github.com/learnables/learn2learn/blob/master/examples/vision/lightning/main.py I got this error:
Environment: I'm using Anaconda with Python 3.10 on Windows 10, and this is what I get from pip freeze:
Does anyone have an idea what might be causing it?
PS: I also tried this on another script using l2l, MAML, and PyTorch Lightning with a custom dataset and got the same bug.