Open exol-forlife opened 3 years ago
@exol-forlife hi~,did you solve the problem?
I couldn't find a solution to it, so I rewrote the code in plain PyTorch rather than pytorch-lightning, but the results were not satisfying :(
Do you mean that the result is not as stated in the paper?
Hi, sorry for ignoring the problem before. I think this is because this repo depends on an old version of pytorch-lightning. @exol-forlife could you describe your re-implementation problem in more detail?
I updated pytorch-lightning to 0.7.3 and still have this problem @Sleepychord
Hi, I have the same question. Did you solve this? @miaomi1994
Same question with both pytorch-lightning 0.6.0 and 0.7.3; it looks like the version given in the README is wrong. @Sleepychord @dm-thu Please give some help.
Modify the parameters of `IntrospectorModule(pl.LightningModule)`'s `__init__` function as follows: change `def __init__(self, config):` to `def __init__(self, hparams):`, then add `config = hparams` so the rest of the code is unchanged.
Do the same for `ReasonerModule`.
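The rename above might look like the sketch below. This is only an illustration of the suggested change, not the repo's actual class body; the `Base` fallback is an assumption added so the snippet runs even without pytorch-lightning installed.

```python
# Sketch of the suggested fix, assuming pytorch-lightning 0.7.x restores a
# checkpoint by passing the saved hyperparameters to __init__ as an argument
# literally named 'hparams' (hence the MisconfigurationException otherwise).
try:
    import pytorch_lightning as pl
    Base = pl.LightningModule
except ImportError:  # stand-in base class so this sketch runs standalone
    Base = object


class IntrospectorModule(Base):
    def __init__(self, hparams):   # was: def __init__(self, config)
        super().__init__()
        config = hparams           # alias so the rest of __init__ is unchanged
        self.config = config
        # ... the original __init__ body keeps using `config` as before ...
```

`ReasonerModule` would get the same rename, as the comment above notes.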
/home/anaconda3/envs/cogltx/lib/python3.7/site-packages/pytorch_lightning/utilities/warnings.py:18: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument in the `DataLoader` init to improve performance.
  warnings.warn(*args, **kwargs)
Traceback (most recent call last):
  File "run_20news.py", line 45, in <module>
    main_loop(config)
  File "/data/CogLTX-main/main_loop.py", line 57, in main_loop
    trainer.fit(introspector)
  File "/home/anaconda3/envs/cogltx/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 695, in fit
    self.load_spawn_weights(model)
  File "/home/anaconda3/envs/cogltx/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 373, in load_spawn_weights
    loaded_model = original_model.__class__.load_from_checkpoint(path)
  File "/home/anaconda3/envs/cogltx/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py", line 1509, in load_from_checkpoint
    model = cls._load_model_state(checkpoint, *args, **kwargs)
  File "/home/anaconda3/envs/cogltx/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py", line 1533, in _load_model_state
    f"Checkpoint contains hyperparameters but {cls.__name__}'s __init__ "
pytorch_lightning.utilities.exceptions.MisconfigurationException: Checkpoint contains hyperparameters but IntrospectorModule's __init__ is missing the argument 'hparams'. Are you loading the correct checkpoint?