fabrahman / Emo-Aware-Storytelling

Code repository for our EMNLP 2020 long paper "Modeling Protagonist Emotions for Emotion-Aware Storytelling" (https://arxiv.org/abs/2010.06822)
MIT License

Question about a RuntimeError when running bash run_emorl.sh #5

Open xinli2008 opened 2 years ago

xinli2008 commented 2 years ago

Thanks for your code and for your answers to the previous question. I am sorry to bother you with a new one. I followed the instructions in the README, and when I ran bash run_emorl.sh with rl_method set to comet, I hit the following error:

```
Traceback (most recent call last):
  File "train_rl.py", line 846, in <module>
    tf.app.run()
  File "/home/lixin/enter/envs/lx03/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/lixin/enter/envs/lx03/lib/python3.6/site-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/home/lixin/enter/envs/lx03/lib/python3.6/site-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "train_rl.py", line 835, in main
    _train_epoch(sess, epoch==0)
  File "train_rl.py", line 562, in _train_epoch
    reward_base = get_reward(predictor, story_base, rets_data['batch']['unique_id'], train_arc_file, method=FLAGS.rl_method)
  File "/home/lixin/Emo-Aware-Storytelling-master/Reinforcement/rewards_v2.py", line 281, in get_reward
    comet_prediction = get_comet_prediction(gen_result)
  File "/home/lixin/Emo-Aware-Storytelling-master/comet_generate.py", line 61, in get_comet_prediction
    input_event, model, sampler, data_loader, text_encoder, category)
  File "comet-commonsense/src/interactive/functions.py", line 124, in get_atomic_sequence
    input_event, category, data_loader, text_encoder)
  File "comet-commonsense/src/interactive/functions.py", line 158, in set_atomic_inputs
    XMB[:, :len(prefix)] = torch.LongTensor(prefix)
RuntimeError: The expanded size of the tensor (18) must match the existing size (82) at non-singleton dimension 1. Target sizes: [1, 18]. Tensor sizes: [82]
```

I tried to handle it with try/except/else, but that did not work well. So I am wondering whether you modified the commonsense transformer somewhere; if not, have you ever encountered this error? If you are busy, take care of your own things first; if you have free time, please answer me. Meanwhile, I will keep trying to work it out.

Tip: sometimes this problem appears early in training, sometimes late.

xinli2008 commented 2 years ago

Maybe I have come up with a (not great) workaround: the program crashes when the prefix is too long. For example, I saw prefixes of length 90 and 79:

```
90 [295, 323, 496, 569, 12, 4830, 323, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290]
79 [606, 1040, 485, 481, 6545, 488, 886, 622, 8353, 1011, 295, 323, 496, 569, 12, 4830, 323, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290, 290]
```

So I changed the code with a guard like `if len(prefix) > 17: xxxxxxxxx`. Although this lets training run, it is not a clean solution. And why is the repeated token id 290? If you have a better solution, please let me know. Again, sorry to bother you, and I hope you have a nice day!
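For reference, the guard the commenter describes can be sketched as a plain truncation before the tensor assignment in `set_atomic_inputs`. This is a minimal illustration, not the repo's actual fix: the names `safe_prefix` and `MAX_CTX` are my own, and 18 is simply the buffer size reported by the traceback. Truncating the token list to the preallocated context window avoids the size-mismatch `RuntimeError`:

```python
MAX_CTX = 18  # illustrative: the buffer size (XMB.size(1)) from the traceback


def safe_prefix(prefix, max_ctx=MAX_CTX):
    """Clip a token-id prefix so it fits the preallocated buffer.

    A prefix longer than the buffer's second dimension makes
    `XMB[:, :len(prefix)] = torch.LongTensor(prefix)` fail, since the
    left-hand slice is capped at max_ctx positions while the right-hand
    tensor keeps its full length.
    """
    if len(prefix) > max_ctx:
        prefix = prefix[:max_ctx]
    return prefix


# An 82-token prefix (as in the traceback) is clipped to 18 tokens.
clipped = safe_prefix(list(range(82)))
```

Truncation silently drops tokens, so whether this is acceptable depends on whether the tail is real content or just padding (the repeated id 290 in the dumps above looks like padding, but that is a guess).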

xinli2008 commented 2 years ago

I found a bug when I ran run_emorl.sh (reward: comet) here: https://github.com/fabrahman/Emo-Aware-Storytelling/blob/a8abea10f498c51cdbea09573e0c4b26aac69e82/Reinforcement/rewards_v2.py#L252 . When the length of story_rl or story_base is 6 or 7, the code works. But when the length is 5, for example:

```
[['I was a little girl and was excited to play with my friends.', 'We went to the park and played with our friends.', 'My brother and I were so excited to go play with my friends.', 'We went home and were so happy to be done with the day.', '<|endoftext|>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>']]
```

the above error appears. My solution is to branch on the length of sample_story and then take the value accordingly.
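One way to sketch this workaround is to strip the `<|endoftext|>` filler sentence before any code that indexes into the story by a fixed position. The names `EOS_MARKER` and `clean_story` are hypothetical; the repo's `rewards_v2.py` may split and index stories differently, so this only illustrates the idea:

```python
EOS_MARKER = "<|endoftext|>"  # GPT-2 end-of-text token seen in the sample above


def clean_story(sentences):
    """Drop sentences that are just end-of-text padding.

    Generated stories sometimes end with a filler "sentence" made of the
    EOS token plus junk characters, so a 5-element list may really hold
    only 4 content sentences. Filtering first lets downstream code that
    assumes 6 or 7 sentences branch on the true content length instead.
    """
    return [s for s in sentences if not s.lstrip().startswith(EOS_MARKER)]


sample = [
    "I was a little girl and was excited to play with my friends.",
    "We went to the park and played with our friends.",
    "My brother and I were so excited to go play with my friends.",
    "We went home and were so happy to be done with the day.",
    "<|endoftext|>>>>>>>>>>>>>>>>>>>>>>",
]
content = clean_story(sample)  # keeps only the 4 real sentences
```

After filtering, a length check (`if len(content) < 6: ...`) can decide how to score short samples rather than crashing on them.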