quanpn90 / NCE_CNNLM

Convolutional Neural Network Language Models
Please give an example of how to run training or sampling #1

Open ghost opened 6 years ago

ghost commented 6 years ago

I configured the environment according to README.md, but I don't know how to run it to check whether I did everything correctly. Please help.

quanpn90 commented 6 years ago

Sorry, it's been a while and I have moved to a new workplace. Given that Torch is deprecated and outdated with respect to current CUDA and cuDNN versions, I would not recommend running it.

But I have modified the README.md to give you the command for PTB.

Thanks for looking at the repo!

ghost commented 6 years ago

Thanks. I tried the parameters you provided, but I can't reproduce the best result from the paper. Where might the problem be? Can you tell me, please?

quanpn90 commented 6 years ago

Thanks for trying the code. What perplexity did you get? If I remember correctly, we reported the average of 5 runs.

ghost commented 6 years ago

Thanks for your reply. My first run hit an error; I added a threshold parameter (set to 0) to fix it. Then I tried again and it ran correctly. I got 2 runs around 120, well above the reported 92 (+MLPConv+COM, k=128, w=3+5, on Penn Treebank, #p=8M). I also found that the model has only 3M parameters. Are there tricks to reduce the perplexity during training, or did I make a mistake somewhere?

quanpn90 commented 5 years ago

Thanks for the question.

Probably something differs from my setup, such that the model would need more parameters.

Can you at least check that the context size is 16 (as far as I can see, it is 16 in the default options)?

May I ask why you need to run this experiment? As far as I know, my result is fairly outdated, and at the time we did not show it to be better than an RNN.

Best regards,

ghost commented 5 years ago

Thanks. Recently, my boss asked me to do some work on language models, and I want to try combining CapsNet and CNN. So I would like to understand how the NCE_CNNLM model works, either to improve it or to compare it against my own model.
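For orientation, the noise-contrastive estimation (NCE) objective that gives the repo its name can be sketched generically as follows. This is an illustrative sketch of standard NCE for language modeling, not the repo's actual Torch code; all function and variable names here are made up for the example:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_loss(target_score, noise_scores, target_noise_prob, noise_probs, k):
    """NCE loss for one target word against k noise samples.

    target_score      : model score s(w) for the true next word
    noise_scores      : model scores for the k sampled noise words
    target_noise_prob : noise-distribution probability p_n(w) of the true word
    noise_probs       : noise-distribution probabilities of the sampled words
    k                 : number of noise samples
    """
    # P(data | w) = sigmoid(s(w) - log(k * p_n(w)))
    pos = np.log(sigmoid(target_score - np.log(k * target_noise_prob)))
    # Each noise sample should be classified as noise, not data.
    neg = np.sum(np.log(sigmoid(-(noise_scores - np.log(k * noise_probs)))))
    return -(pos + neg)

# Toy example: one target word and 3 noise words drawn from a unigram distribution.
loss = nce_loss(2.0, np.array([-1.0, 0.5, -2.0]),
                0.01, np.array([0.1, 0.05, 0.2]), k=3)
```

The point of NCE here is that the loss only touches the target word and k sampled words, avoiding the full softmax over the vocabulary at training time.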

quanpn90 commented 5 years ago

Hi,

Thanks for your attention to my work. However, I would recommend the work of Dauphin et al. instead:

https://arxiv.org/pdf/1612.08083.pdf

They used CNNs in a smarter way (I had problems with parameters exploding as the context grew) and obtained very good results that scale to large datasets. You can find an implementation in https://github.com/pytorch/fairseq, which has better support (a newer framework).
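The core idea in that paper is the gated linear unit (GLU): h = (X∗W + b) ⊗ sigmoid(X∗V + c), with causal convolutions so position t only sees tokens up to t. A minimal numpy sketch of one such block (illustrative only, not the fairseq implementation; shapes and names are assumptions for the example):

```python
import numpy as np

def glu_conv1d(x, W_a, W_b, b_a, b_b):
    """One gated-linear-unit conv block in the style of Dauphin et al. (2017):
    h_t = (window_t * W_a + b_a) * sigmoid(window_t * W_b + b_b)

    x        : (seq_len, d_in) input embeddings
    W_a, W_b : (width, d_in, d_out) filters for the linear and gate paths
    Left-pads with zeros so position t only sees tokens <= t (causal,
    as required for language modeling).
    """
    width = W_a.shape[0]
    padded = np.vstack([np.zeros((width - 1, x.shape[1])), x])
    out = []
    for t in range(x.shape[0]):
        window = padded[t:t + width]                    # (width, d_in)
        a = np.einsum('wd,wde->e', window, W_a) + b_a   # linear path
        g = np.einsum('wd,wde->e', window, W_b) + b_b   # gate path
        out.append(a * (1.0 / (1.0 + np.exp(-g))))      # a * sigmoid(g)
    return np.stack(out)                                # (seq_len, d_out)

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 8))
h = glu_conv1d(x, rng.standard_normal((3, 8, 16)) * 0.1,
               rng.standard_normal((3, 8, 16)) * 0.1,
               np.zeros(16), np.zeros(16))
# h.shape == (10, 16)
```

The sigmoid gate controls how much of each channel passes through, which is what keeps the parameter count and gradients manageable as the context window grows.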

Best regards, Quan

ghost commented 5 years ago

Thanks for your help.