XiangLi1999 / PrefixTuning

Prefix-Tuning: Optimizing Continuous Prompts for Generation

The version of pytorch_lightning #29

Open Jamesswang opened 2 years ago

Jamesswang commented 2 years ago

Thank you for your open source code. I tried to run your program on the server, but the interface of pytorch_lightning has changed, so I got some errors. May I know the version of pytorch_lightning you and your team use? Thank you!

Looking forward to your reply.

tannonk commented 2 years ago

@Jamesswang, according to the environment.yml on the master branch, you should be fine using pytorch-lightning==0.8.5.

If you set up a clean environment from this file, e.g. conda env create -f environment.yml, you should avoid dependency issues. That said, I had to remove the following two lines when setting up the environment:

Jamesswang commented 2 years ago

@tannonk Thank you for your reply

When I was running the code, I encountered the following error after one epoch:

Traceback (most recent call last):
  File "finetune.py", line 876, in <module>
    main(args)
  File "finetune.py", line 784, in main
    logger=logger,
  File "/home/wanghaotian/PrefixTuning/seq2seq/lightning_base.py", line 795, in generic_train
    trainer.fit(model)
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1003, in fit
    results = self.single_gpu_train(model)
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 186, in single_gpu_train
    results = self.run_pretrain_routine(model)
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1213, in run_pretrain_routine
    self.train()
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
    self.run_training_epoch()
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 470, in run_training_epoch
    self.run_evaluation(test_mode=False)
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 430, in run_evaluation
    self.on_validation_end()
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/trainer/callback_hook.py", line 112, in on_validation_end
    callback.on_validation_end(self, self.get_model())
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py", line 12, in wrapped_fn
    return fn(*args, **kwargs)
  File "/home/wanghaotian/miniconda3/envs/prefix-tuning/lib/python3.6/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 318, in on_validation_end
    self._save_model(filepath)
TypeError: _save_model() missing 2 required positional arguments: 'trainer' and 'pl_module'

Have you ever encountered such a problem when running the code?

tannonk commented 2 years ago

Yes, I ran into the same error actually and haven't managed to solve that one yet. I'd open a new issue for that...

XiangLi1999 commented 2 years ago

Try pip install pytorch-lightning==0.9.0
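Since the TypeError above comes from a callback API change across pytorch-lightning releases (ModelCheckpoint._save_model gained trainer and pl_module arguments), it can help to confirm which version is actually installed in the environment before launching finetune.py. A minimal sketch, not part of the PrefixTuning repo; check_version is a hypothetical helper, and importlib.metadata requires Python 3.8+ (the repo's env uses 3.6, where pkg_resources would be the equivalent):

```python
# Sketch: verify a pinned dependency version before running training.
# check_version is a hypothetical helper, not part of PrefixTuning.
from importlib.metadata import version, PackageNotFoundError


def check_version(pkg: str, expected: str) -> str:
    """Compare the installed version of `pkg` against `expected`
    and return a short status string."""
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        return "not installed"
    return "ok" if installed == expected else f"found {installed}, expected {expected}"


if __name__ == "__main__":
    # e.g. after `pip install pytorch-lightning==0.9.0`:
    print(check_version("pytorch-lightning", "0.9.0"))
```

If the output is anything other than "ok", the installed version does not match the pin and API-mismatch errors like the one above are likely.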