Open MagicDinosaur opened 2 years ago
@MagicDinosaur Hi, thanks for your interest in our work. The issue may be that you did not install the correct environment dependencies.
Hello @SivilTaram! Thanks for the wonderful work. I am working on a project to extend your approach and was trying to replicate your work, but I faced the same issue. It has also been asked on StackOverflow, where the answer mentioned that `OpenAIGPTLMHeadModel` only returns `lm_logits` and not `hidden_states`.
I suppose it's because of an incorrect environment setup, but I don't understand why that's the case; I followed the steps mentioned in your README. I got `RuntimeError: Scikit-learn requires Python 3.8 or later.`, but I installed it separately: `pip install pytorch-pretrained-bert==0.6.2`. I am assuming this is where the issue may have crept in, as the older version of `OpenAIGPTLMHeadModel` may have returned both `lm_logits` and `hidden_states`. Do I have to add the path of the transformers library somewhere? Or am I missing anything?
Your help would be really appreciated, as we plan to extend this project, and replicating such a complex model from scratch will take us a very long time.
@parthushah8 Oh, sorry for the late response! I updated the README in Nov 2022. You may check it out if you still need it. Thanks!
same issue
Hi! Thank you for your research contribution. So far I have downloaded and unzipped the "Trained Model Weights - Original setting", but I got a problem when trying to run your interactive.py file:

```
Traceback (most recent call last):
  File "/content/Persona-Dialogue-Generation/interactive.py", line 141, in <module>
    interactive(parser.parse_args(print_args=False), print_parser=parser)
  File "/content/Persona-Dialogue-Generation/interactive.py", line 121, in interactive
    acts[1] = agents[1].act()
  File "/content/Persona-Dialogue-Generation/agents/transmitter/transmitter.py", line 901, in act
    return self.batch_act([self.observation])[0]
  File "/content/Persona-Dialogue-Generation/agents/transmitter/transmitter.py", line 857, in batch_act
    self.init_cuda_buffer(batchsize)
  File "/content/Persona-Dialogue-Generation/agents/transmitter/transmitter.py", line 842, in init_cuda_buffer
    sc = self.model(input_dummy, None, None, output_dummy, None)[1]
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/Persona-Dialogue-Generation/agents/transmitter/gpt/model.py", line 91, in forward
    lm_logits, hidden_states = self.transformer_module(input_seq, None, dis_seq)
ValueError: not enough values to unpack (expected 2, got 1)
```
Could you take a look at it and instruct me on how to run it properly? I am pretty new to this area. Thank you so much in advance!
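For anyone hitting the same `ValueError`: the unpack fails because newer library releases return a different number of values from the transformer's forward call than the two values (`lm_logits`, `hidden_states`) this repo's code expects. Installing the pinned dependency versions is the supported fix; as a stopgap, a small compatibility shim can normalize the return value. This is a hedged sketch (the function name and the assumed return shapes are illustrative, not the repo's actual API):

```python
def unpack_gpt_output(output):
    """Normalize a GPT forward() return value to (lm_logits, hidden_states).

    Older library versions returned a (lm_logits, hidden_states) tuple;
    newer ones may return a 1-tuple or a bare tensor, which triggers
    'not enough values to unpack (expected 2, got 1)'. This shim accepts
    all three shapes; hidden_states is None when unavailable.
    """
    if isinstance(output, tuple):
        if len(output) >= 2:
            return output[0], output[1]
        if len(output) == 1:
            return output[0], None
    return output, None

# The failing line in agents/transmitter/gpt/model.py could then read:
# lm_logits, hidden_states = unpack_gpt_output(
#     self.transformer_module(input_seq, None, dis_seq))
```

Note that if downstream code actually consumes `hidden_states`, this shim only moves the failure later; matching the pinned environment is the reliable path.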