zyxElsa / ProSpect

Official implementation of the paper "ProSpect: Prompt Spectrum for Attribute-Aware Personalization of Diffusion Models" (SIGGRAPH Asia 2023)
Apache License 2.0

No `test_dataloader()` method defined to run `Trainer.test` #13

Open AnkitSinha123 opened 8 months ago

AnkitSinha123 commented 8 months ago

Epoch 1: 50%|▍| 201/404 [01:29<01:30, 2.25it/s, loss=0.116, v_num=0, train/loss_simple_step=0.419, train/loss_vlb_step=0.00246, tra
Saving latest checkpoint...

> /mnt/data/ankit/ProSpect/main.py(779)<module>()
-> if not opt.no_test and not trainer.interrupted:
(Pdb) next
> /mnt/data/ankit/ProSpect/main.py(780)<module>()
-> trainer.test(model, data)
(Pdb) next
pytorch_lightning.utilities.exceptions.MisconfigurationException: No test_dataloader() method defined to run Trainer.test.

Can you please help me with this issue?

surfingnirvana commented 8 months ago

I have the same error at the end of Epoch 5:

Average Peak memory 12120.32MiB
Epoch 5: 100%|██████████████████████| 101/101 [01:03<00:00, 1.59it/s, loss=0.0902, v_num=0, train/loss_simple_step=0.0691, train/loss_vlb_step=0.000233, train/loss_step=0.0691, global_step=599.0, train/loss_simple_epoch=0.144, train/loss_vlb_epoch=0.00187, train/loss_epoch=0.144]
Epoch 5, global step 599: val/loss_simple_ema was not in top 1
Epoch 5: 100%|██████████████████████| 101/101 [01:04<00:00, 1.58it/s, loss=0.0902, v_num=0, train/loss_simple_step=0.0691, train/loss_vlb_step=0.000233, train/loss_step=0.0691, global_step=599.0, train/loss_simple_epoch=0.144, train/loss_vlb_epoch=0.00187, train/loss_epoch=0.144]
Saving latest checkpoint...

Traceback (most recent call last):
  File "main.py", line 781, in <module>
    trainer.test(model, data)
  File "C:\Users\USER1\anaconda3\envs\prospect\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 911, in test
    return self._call_and_handle_interrupt(self._test_impl, model, dataloaders, ckpt_path, verbose, datamodule)
  File "C:\Users\USER1\anaconda3\envs\prospect\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "C:\Users\USER1\anaconda3\envs\prospect\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 954, in _test_impl
    results = self._run(model, ckpt_path=self.tested_ckpt_path)
  File "C:\Users\USER1\anaconda3\envs\prospect\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1128, in _run
    verify_loop_configurations(self)
  File "C:\Users\USER1\anaconda3\envs\prospect\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py", line 42, in verify_loop_configurations
    __verify_eval_loop_configuration(trainer, model, "test")
  File "C:\Users\USER1\anaconda3\envs\prospect\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py", line 186, in __verify_eval_loop_configuration
    raise MisconfigurationException(f"No {loader_name}() method defined to run Trainer.{trainer_method}.")
pytorch_lightning.utilities.exceptions.MisconfigurationException: No test_dataloader() method defined to run Trainer.test.

Somebody has a solution here: https://github.com/Lightning-AI/pytorch-lightning/discussions/11437

"you need to pass in the datamodule to trainer.test."

I do not know how to do it.
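To make the suggestion from that Lightning discussion concrete: `Trainer.test` needs a `test_dataloader()` from somewhere, either defined on the model or supplied via a datamodule. The sketch below mimics (it does not reproduce) Lightning's loop-configuration check, so it runs without `pytorch_lightning` installed; the class names `ModelWithoutLoader` and `DataModuleWithLoader` are hypothetical stand-ins, not ProSpect code.

```python
# Hedged sketch of why Trainer.test raises MisconfigurationException,
# and why passing a datamodule satisfies the check. This only mimics
# Lightning's verify_loop_configurations logic; names are illustrative.

class MisconfigurationException(Exception):
    pass

def verify_test_loop(model, datamodule=None):
    """Trainer.test needs a test dataloader from *somewhere*:
    either the model defines test_dataloader(), or a datamodule does."""
    has_loader = callable(getattr(model, "test_dataloader", None)) or \
                 callable(getattr(datamodule, "test_dataloader", None))
    if not has_loader:
        raise MisconfigurationException(
            "No test_dataloader() method defined to run Trainer.test."
        )
    return True

class ModelWithoutLoader:
    """Stand-in for a LightningModule with no test_dataloader()."""
    pass

class DataModuleWithLoader:
    """Stand-in for a datamodule that does provide one."""
    def test_dataloader(self):
        return ["batch0", "batch1"]

# With neither source, the check fails:
try:
    verify_test_loop(ModelWithoutLoader())
except MisconfigurationException as e:
    print(e)  # No test_dataloader() method defined to run Trainer.test.

# Passing the datamodule satisfies it:
print(verify_test_loop(ModelWithoutLoader(), DataModuleWithLoader()))  # True
```

In real Lightning code the equivalent call would be `trainer.test(model, datamodule=data)`, provided `data` actually implements `test_dataloader()`.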

AnkitSinha123 commented 7 months ago

@surfingnirvana

I found the solution.

In main.py, just change

trainer_kwargs["max_steps"] = trainer_opt.max_steps

to

trainer_kwargs["max_steps"] = opt.max_steps

It will work.

surfingnirvana commented 7 months ago

Thank you, it works!