carlomarxdk / life2vec-light

Basic implementation of the life2vec model with dummy data.
https://life2vec.dk
MIT License

TypeError: on_train_epoch_end() missing 1 required positional argument: 'output' #2

Closed ddu97 closed 5 months ago

ddu97 commented 5 months ago

Hi! I encountered an issue while trying to run simple_workflow.ipynb.

During the execution of `trainer.fit(model=l2v, datamodule=datamodule)`, I received the following error message: `TypeError: on_train_epoch_end() missing 1 required positional argument: 'output'`

However, in the TransformerEncoder, I noticed the relevant code snippet: `def on_train_epoch_end(self, output):`

I suspect this error is caused by the `output` argument not being passed to the hook. The method is defined with an `output` parameter, but I could not find where the caller actually supplies it when `on_train_epoch_end()` is invoked.

Could you please assist me in resolving this issue? If further information or code snippets are required, please feel free to let me know. Thank you!
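The mismatch described above can be reproduced in isolation: recent PyTorch Lightning versions invoke `on_train_epoch_end` with no positional arguments, so a hook that still declares an `output` parameter (the pre-1.8 style) raises exactly this `TypeError`. The class and function names below are illustrative, not taken from the repository:

```python
class OldStyleModule:
    # Pre-1.8 Lightning style: the hook expected epoch outputs to be passed in.
    def on_train_epoch_end(self, output):
        pass


class NewStyleModule:
    # Current Lightning style: the hook takes no arguments.
    def on_train_epoch_end(self):
        pass


def call_hook(module):
    """Mimics how recent Lightning calls the hook: with no positional args."""
    try:
        module.on_train_epoch_end()
        return "ok"
    except TypeError as e:
        return f"TypeError: {e}"


print(call_hook(OldStyleModule()))  # raises, mirroring the reported error
print(call_hook(NewStyleModule()))  # succeeds
```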

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[38], line 1
----> 1 trainer.fit(model=l2v, datamodule=datamodule)

File ~/anaconda3/envs/folder/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py:544, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    542 self.state.status = TrainerStatus.RUNNING
    543 self.training = True
--> 544 call._call_and_handle_interrupt(
    545     self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
    546 )

File ~/anaconda3/envs/folder/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py:44, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
     42 if trainer.strategy.launcher is not None:
     43     return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
---> 44 return trainer_fn(*args, **kwargs)
     46 except _TunerExitException:
     47     _call_teardown_hook(trainer)

File ~/anaconda3/envs/folder/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py:580, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    573 assert self.state.fn is not None
    574 ckpt_path = self._checkpoint_connector._select_ckpt_path(
    575     self.state.fn,
    576     ckpt_path,
    577     model_provided=True,
    578     model_connected=self.lightning_module is not None,
    579 )
--> 580 self._run(model, ckpt_path=ckpt_path)
    582 assert self.state.stopped
    583 self.training = False

File ~/anaconda3/envs/folder/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py:989, in Trainer._run(self, model, ckpt_path)
    984 self._signal_connector.register_signal_handlers()
    986 # ----------------------------
    987 # RUN THE TRAINER
    988 # ----------------------------
--> 989 results = self._run_stage()
    991 # ----------------------------
    992 # POST-Training CLEAN UP
    993 # ----------------------------
    994 log.debug(f"{self.__class__.__name__}: trainer tearing down")

File ~/anaconda3/envs/folder/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py:1035, in Trainer._run_stage(self)
   1033 self._run_sanity_check()
   1034 with torch.autograd.set_detect_anomaly(self._detect_anomaly):
-> 1035     self.fit_loop.run()
   1036 return None
   1037 raise RuntimeError(f"Unexpected state {self.state}")

File ~/anaconda3/envs/folder/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py:203, in _FitLoop.run(self)
    201 self.on_advance_start()
    202 self.advance()
--> 203 self.on_advance_end()
    204 self._restarting = False
    205 except StopIteration:

File ~/anaconda3/envs/folder/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py:373, in _FitLoop.on_advance_end(self)
    368 # call train epoch end hooks
    369 # we always call callback hooks first, but here we need to make an exception for the callbacks that
    370 # monitor a metric, otherwise they wouldn't be able to monitor a key logged in
    371 # `LightningModule.on_train_epoch_end`
    372 call._call_callback_hooks(trainer, "on_train_epoch_end", monitoring_callbacks=False)
--> 373 call._call_lightning_module_hook(trainer, "on_train_epoch_end")
    374 call._call_callback_hooks(trainer, "on_train_epoch_end", monitoring_callbacks=True)
    376 trainer._logger_connector.on_epoch_end()

File ~/anaconda3/envs/folder/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py:157, in _call_lightning_module_hook(trainer, hook_name, pl_module, *args, **kwargs)
    154 pl_module._current_fx_name = hook_name
    156 with trainer.profiler.profile(f"[LightningModule]{pl_module.__class__.__name__}.{hook_name}"):
--> 157     output = fn(*args, **kwargs)
    159 # restore current_fx when nested context
    160 pl_module._current_fx_name = prev_fx_name

TypeError: on_train_epoch_end() missing 1 required positional argument: 'output'
```

carlomarxdk commented 5 months ago

Thanks for the feedback! Fixed in v0.1.3.
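For anyone hitting this before upgrading, the usual migration (a minimal sketch of the common pattern, not the actual v0.1.3 diff) is to drop the `output` parameter and accumulate per-step results on the module itself, since newer PyTorch Lightning no longer passes epoch outputs to the hook. The class below is hypothetical:

```python
class EpochEndSketch:
    """Stand-in for a LightningModule, showing only the hook pattern."""

    def __init__(self):
        # Collected manually, since Lightning no longer passes outputs.
        self._step_losses = []

    def training_step(self, loss):
        # A real training_step receives (batch, batch_idx) and computes the
        # loss; here we just record a precomputed value for illustration.
        self._step_losses.append(loss)
        return loss

    def on_train_epoch_end(self):
        # New-style signature: no `output` argument.
        avg = sum(self._step_losses) / len(self._step_losses)
        self._step_losses.clear()
        return avg
```

The same idea works inside the actual `TransformerEncoder`: append whatever `on_train_epoch_end(self, output)` used to read from `output` to a list in `training_step`, then consume and clear that list in the zero-argument hook.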