ChengyueGongR / advsoft

Language Model Baselines for PyTorch

reproducing language modeling results #3

Open tonytan48 opened 4 years ago

tonytan48 commented 4 years ago

Hi Mr. Gong, congratulations on your work on the new language modeling technique. I was trying to reproduce your experiment, but an error occurred in the dictionary: [screenshot]. If I substitute the data.py from the original MoS repo, e.g. "import data_mos as data", the following error occurs: [screenshot]

May I ask how to resolve this? I am not sure whether the problem is in the tokenization part ("data.py") or in the embedding_regularization part.

Best, Qingyu Tan

tonytan48 commented 4 years ago

Thank you for the prompt reply.

After pulling the latest branch with the updated "data.py", I still get the following error:

Traceback (most recent call last):
  File "main.py", line 285, in <module>
    train()
  File "main.py", line 228, in train
    log_prob, hidden[s_id], rnn_hs, dropped_rnn_hs = parallel_model(cur_data, hidden[s_id], return_h=True, targets=cur_targets, is_switch=is_switch)
  File "/home/qingyu.tan/miniconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/qingyu.tan/projects/advsoft/mos-awd-lstm-lm/model.py", line 83, in forward
    emb, sigma = embedded_dropout(self.encoder, torch.ones_like(self.encoder.weight), input, dropout=self.dropoute if (self.training and self.use_dropout) else 0, is_training=self.training)
  File "/home/qingyu.tan/projects/advsoft/mos-awd-lstm-lm/embed_regularize.py", line 34, in embedded_dropout
    X = embed._backend.Embedding.apply(words, masked_embed_weight,
  File "/home/qingyu.tan/miniconda3/envs/py36/lib/python3.6/site-packages/torch/nn/backends/backend.py", line 10, in __getattr__
    raise NotImplementedError
NotImplementedError

May I know whether you can reproduce the error? My torch version is 0.4.1.post2.
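For reference, the failing call `embed._backend.Embedding.apply(...)` in embed_regularize.py goes through a private PyTorch backend API that was changed/removed across versions, which is what raises the NotImplementedError here. The usual fix in the AWD-LSTM codebases is to replace that call with the public `torch.nn.functional.embedding`. Below is a minimal sketch of the standard AWD-LSTM `embedded_dropout` written that way; note it is an assumption that this matches this repo's variant (the traceback shows advsoft's version also returns a `sigma` value, which is not reproduced here):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def embedded_dropout(embed, words, dropout=0.1, scale=None):
    """Row-level (word-type) embedding dropout, as in AWD-LSTM.

    Drops entire embedding rows with probability `dropout` and rescales the
    surviving rows by 1/(1 - dropout), then looks up `words` in the masked table.
    """
    if dropout:
        mask = embed.weight.new_empty((embed.weight.size(0), 1)) \
                   .bernoulli_(1 - dropout).expand_as(embed.weight) / (1 - dropout)
        masked_embed_weight = mask * embed.weight
    else:
        masked_embed_weight = embed.weight
    if scale is not None:
        masked_embed_weight = scale.expand_as(masked_embed_weight) * masked_embed_weight

    # Public replacement for the removed embed._backend.Embedding.apply path.
    return F.embedding(words, masked_embed_weight,
                       embed.padding_idx, embed.max_norm, embed.norm_type,
                       embed.scale_grad_by_freq, embed.sparse)
```

With `dropout=0` this reduces to a plain `embed(words)` lookup, so it can be dropped in without changing evaluation-time behavior.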