Closed rzhangpku closed 5 years ago
@rzhangpku Thank you for your interest in our work. You could try reducing the batch size to 40 (or smaller) by appending --batch_size 40, and uncomment line 254.
@rzhangpku You can drop me an email if you still have this problem
As you suggested, I changed the batch size to 30 by appending --batch_size 30 and uncommented line 254 in train.py, and the problem is now solved. Thanks a lot for your quick and kind reply.
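For reference, a `--batch_size` flag like the one used above is typically wired through argparse. The sketch below is illustrative only; the argument names mirror the commands quoted in this thread, not PaperRobot's actual train.py:

```python
import argparse

def parse_args(argv=None):
    # Hypothetical sketch of the CLI flags seen in this thread;
    # not the repo's real parser.
    parser = argparse.ArgumentParser()
    parser.add_argument("--data_path", default="data/pubmed_abstract")
    parser.add_argument("--model_dp", default="abstract_model/")
    parser.add_argument("--gpu", type=int, default=0)
    # Smaller values reduce GPU memory use at the cost of slower training.
    parser.add_argument("--batch_size", type=int, default=40)
    return parser.parse_args(argv)

args = parse_args(["--batch_size", "30"])
print(args.batch_size)  # 30
```

Passing a smaller value this way is the usual first remedy for CUDA out-of-memory errors, since activation memory scales roughly linearly with batch size.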
Hope you enjoy the experiments and have a good weekend!
Your work "PaperRobot" is excellent and impressive! Have a nice weekend!
I have the same problem, but I cannot solve it by decreasing the batch size.
@ClaireZTH Maybe you can decrease the batch size further, or use a server with more GPU memory.
When I run

python train.py --data_path data/pubmed_abstract --model_dp abstract_model/ --gpu 1

I get this error. Here is my GPU information:

Before I run the command above, all 12196 MiB of GPU memory is free. Can you help me? Thank you very much!