Closed Doragd closed 3 years ago
@Doragd I'm glad that my code could be of help to you 😊.
I unknowingly omitted the KL loss's contribution to the final loss. I also forgot to remove the `inference` parameter before pushing the final `train.py` script. I will fix these errors as soon as possible.
Thank you very much for reporting the errors.
Thanks for your quick reply. 👍
By the way, I found some other errors; they may be typos.
In `linguistic_style_transfer_pytorch/utils/train_w2v.py`, line 11:

```python
model = Word2Vec(sentences=LineSentence(text_file_path),
                 min_count=1, size=config.embedding_size)
```

`config` is not defined here; it should either be passed as a function parameter or defined as a global variable.
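As an illustration of the first option, the config could be passed in explicitly. This is only a sketch: the `W2VConfig` dataclass and `train_w2v` wrapper are hypothetical stand-ins, and the real script would call gensim's `Word2Vec` instead of returning the keyword arguments.

```python
from dataclasses import dataclass

@dataclass
class W2VConfig:
    # Hypothetical stand-in for the repo's config object; only
    # embedding_size and min_count=1 come from the snippet above.
    embedding_size: int = 300
    min_count: int = 1

def train_w2v(text_file_path: str, config: W2VConfig) -> dict:
    # In the real script this would call gensim's Word2Vec; here we just
    # return the keyword arguments it would receive, to show the wiring.
    return {
        "corpus_file": text_file_path,
        "min_count": config.min_count,
        "size": config.embedding_size,
    }

params = train_w2v("corpus.txt", W2VConfig())
```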
Also, in `linguistic_style_transfer_pytorch/utils/vocab.py`, line 77:

```python
self.vocab_save_path+'word2index.json'))
```

should be

```python
self.vocab_save_path+'/word2index.json'))
```

The path separator `/` is missing. The same error appears on lines 81 and 112.
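A quick sketch of why the separator matters (the directory name is hypothetical); `os.path.join` is an alternative to concatenating the `/` by hand:

```python
import os

save_path = "vocab_dir"  # hypothetical save directory

# Plain concatenation drops the separator...
assert save_path + "word2index.json" == "vocab_dirword2index.json"

# ...so add it explicitly, as suggested above:
assert save_path + "/" + "word2index.json" == "vocab_dir/word2index.json"

# ...or let os.path.join insert the platform-specific separator:
joined = os.path.join(save_path, "word2index.json")
assert joined == save_path + os.sep + "word2index.json"
```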
😄 By the way, the repo is also missing a requirements.txt, right?
@Doragd I am really sorry that I couldn't give you a quick reply, as I was busy this whole week with exams at my college. All the errors you have mentioned above seem to arise because, in my final push, I replaced all the hard-coded file path names in the `config.py` file using `os.path.join(..., ...)`, which doesn't append a trailing `/`.
I'll fix all the bugs you have mentioned above right away.
I'm sorry that I forgot to add the requirements.txt file; I'll add it right away.
I'm really thankful to you for pointing out all the bugs 😃. Feel free to ask any other queries!
@h3lio5 Thanks for your help~
By the way, there is another bug when running on the GPU, at `model.py` line 249:

```python
epsilon = torch.randn(mu.size(1))
```

should be

```python
epsilon = torch.randn(mu.size(1)).cuda()
```
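As a side note, a more portable variant (a sketch, not the repo's code) avoids hard-coding `.cuda()` by sampling on whatever device `mu` already lives on, so the same line also works on CPU-only machines:

```python
import torch

mu = torch.zeros(4, 8)  # hypothetical (batch, latent_dim) tensor

# Sample epsilon on mu's own device instead of hard-coding .cuda():
epsilon = torch.randn(mu.size(1), device=mu.device)
```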
@Doragd Fixed it. And please feel free to report any other bugs 😃.
@h3lio5 I have run the code on my server successfully! It's wonderful! But I cannot find the code for the paper's Experiment I: Disentangling Latent Space. Could you find time to reproduce this part?
@Doragd I don't have the trained model weights, as the whole project folder has been deleted from the computer in the CS Dept. where I trained the model. I'll have to retrain the model from scratch, but I'm not sure I will be able to get enough GPU compute time at the lab right now.
@Doragd @h3lio5 I've opened another issue, but it seems like `generate.py` is working well for you guys. Would it be at all possible to check out the issue I've opened, or offer any advice on how to avoid shape errors when running `target_tokenids = model.transfer_style(token_ids, target_style_id)` in `generate.py`? Thanks so much!! :)
I'm facing the same problem
@michaeldu1 @h3lio5 @arijit1410 About the shape error: you should reshape the final hidden state and concatenate it with `word_emb` at every time step; the author seems to have forgotten this.

In the `transfer_style` function, `model.py` line 504:

```python
final_hidden_state = final_hidden_state.transpose(1, 0).contiguous().view(embedded_seq.size(0), -1)
```

In the `generate_sentences` function, `model.py` line 466:

```python
for idx in range(mconfig.max_seq_len):
    hidden_state = self.decoder(torch.cat([word_emb, latent_emb], axis=1), hidden_state)
```
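A minimal, self-contained sketch of the decoding loop described above. All dimensions and the use of `nn.GRUCell` are assumptions for illustration; only the `torch.cat([word_emb, latent_emb], ...)` pattern comes from the fix itself. The point is that the latent embedding is concatenated with the word embedding at every step, so the cell's input size must be `word_dim + latent_dim`.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: word embedding, latent embedding, hidden state, batch.
word_dim, latent_dim, hidden_dim, batch = 16, 8, 32, 2
decoder = nn.GRUCell(word_dim + latent_dim, hidden_dim)

latent_emb = torch.randn(batch, latent_dim)      # fixed across time steps
hidden_state = torch.zeros(batch, hidden_dim)    # initial decoder state

for _ in range(5):  # stand-in for mconfig.max_seq_len
    word_emb = torch.randn(batch, word_dim)      # embedding of the previous token
    # Concatenate the latent embedding at EVERY step, as in the fix above:
    hidden_state = decoder(torch.cat([word_emb, latent_emb], dim=1), hidden_state)
```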
Hey everyone, I also faced many trivial device/shape mismatch mistakes and have corrected almost all of them. I am trying to run `generate.py`, but again there is a shape mismatch in `generate_sentences()` in `model.py`. Can you please report whether any of you finally managed to get good results from this code? I just hope all my efforts aren't in vain.
Sorry, I've given up this project and forgotten about any details.
Yeah same. This guy is just padding repos to this GitHub, and this codebase was an absolute train wreck and horribly written.
Hello, thanks for your code for this paper; it helps me a lot. But in your code at `linguistic_style_transfer_pytorch/model.py`, line 157, in `vae_and_classifier_loss`: why is the KL loss not used?
By the way, at `train.py` line 19, the parameter `inference` does not seem to be defined?
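For context, the KL term a VAE loss normally includes can be sketched as follows. This is a minimal illustration of the standard closed-form KL divergence between a diagonal Gaussian posterior and a standard normal prior, not the repo's actual code; the `mu`/`log_sigma` shapes are hypothetical.

```python
import torch

mu = torch.zeros(4, 8)         # hypothetical (batch, latent_dim) posterior mean
log_sigma = torch.zeros(4, 8)  # hypothetical posterior log-std

# KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dims, averaged over batch:
kl = 0.5 * torch.sum(
    mu.pow(2) + torch.exp(2 * log_sigma) - 2 * log_sigma - 1, dim=1
).mean()
# With mu = 0 and sigma = 1, the divergence is exactly zero.
```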