frankShih opened this issue 6 years ago
Does it have something to do with my Python version? (I'm using miniconda3.)
Hi
Were you able to find a solution for the unsqueeze error during evaluation?
Hi @aayushee
I modified my code based on https://github.com/yanwii/seq2seq; please take a look (it is a demo of a chatbot based on a seq2seq model with beam search).
Hey, I need help here. I'm facing the same issue.
The problem is that when you initialize your data it is not sorted correctly; however, torchtext can do this quite easily for you.
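A minimal sketch of what "sorted correctly" means here (my assumption: `pack_padded_sequence` in torch 0.4 requires each batch to be sorted by descending sequence length, which torchtext's `BucketIterator` handles via `sort_within_batch=True`; the data below is made up for illustration):

```python
# Toy batch of token-id sequences of uneven length.
batch = [[4, 9, 2], [7], [3, 1, 5, 8], [6, 2]]

# Sort the batch by sequence length, longest first, keeping the original
# positions so model outputs can be restored to input order afterwards.
order = sorted(range(len(batch)), key=lambda i: len(batch[i]), reverse=True)
sorted_batch = [batch[i] for i in order]
lengths = [len(seq) for seq in sorted_batch]  # descending lengths
```

This is the same ordering a torchtext iterator with a length-based `sort_key` produces for you automatically.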
I have done it (with the SST dataset) in the LSTM RNN model in my repo here; look at how I initialize the datasets and at my forward pass.
GitHub repo (the project is ongoing, so there may be quite a few updates and small bugs in places that will be fixed in the next couple of days): https://github.com/s124265/NLP-DL-Project
Has anyone found an answer for this? I'm hitting a similar issue. Can anyone resolve it?
Hi all,
I'm trying to run the seq2seq model (seq2seq-translation-batched.ipynb). My environment is Python 3.6.4, torch 0.4.0.
And I made some modifications:

Changed
`return F.softmax(attn_energies).unsqueeze(1)`
to
`return F.softmax(attn_energies, dim=1).unsqueeze(1)`
(I cannot run the code without adding the dim param.)

Changed
`energy = hidden.dot(encoder_output)`
to
`energy = hidden.mm(encoder_output.t())`
(dot function to matmul function; still, I cannot run without this change.)

Then I failed in the "Putting it all together" part; following is the log:
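A minimal sketch of these two changes in isolation (the shapes are my assumption, based on the tutorial's batched attention where `hidden` and `encoder_output` are 2-D `(1, H)` tensors; the numbers are made up):

```python
import torch
import torch.nn.functional as F

# Toy attention energies: batch of 1, three encoder positions.
attn_energies = torch.tensor([[1.0, 2.0, 3.0]])

# Since torch 0.4, F.softmax wants the dimension given explicitly;
# omitting dim triggers a deprecation warning and can pick the wrong axis.
attn_weights = F.softmax(attn_energies, dim=1).unsqueeze(1)  # (1, 1, 3)

# Tensor.dot only accepts 1-D tensors; with 2-D (1, H) tensors you need a
# matrix multiply against the transpose to get a (1, 1) energy score.
hidden = torch.tensor([[1.0, 0.0]])          # (1, H)
encoder_output = torch.tensor([[0.5, 2.0]])  # (1, H)
energy = hidden.mm(encoder_output.t())       # (1, 1)
```

The `mm`-with-transpose form computes the same inner product that `dot` would give on the flattened 1-D vectors, just without the dimensionality error.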
Please give me some suggestions, thanks.