ericxsun / fastText

Library for fast text representation and classification.

Memory error while loading wiki-news model for incremental learning #4

Open vdpappu opened 6 years ago

vdpappu commented 6 years ago

Hi Eric, thanks for the excellent enhancement. I am trying to use your repo for incremental learning, but I am getting a memory error while running the script. My machine has 32 GB of RAM, and I am otherwise able to load the pre-trained model for inference tasks.

[screenshot: memory error output]

Pre-trained model size: 6.8 GB

Command executed:

```
./fasttext skipgram -input /home/aaa/Downloads/datasets/nlu/sed_sof_corpus.txt -inputModel /home/aaa/Downloads/datasets/wiki-news-300d-1M-subword.bin -output sed_sof_trlearn -incr
```
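For context, here is a rough capacity check (a sketch, not this fork's code; the row counts are assumptions based on the published wiki-news-300d-1M-subword model and fastText's default bucket count):

```cpp
// Back-of-envelope estimate of the RAM the input matrix alone needs once a
// subword .bin model is loaded. Assumed numbers: 1M vocabulary words, 300
// dimensions (per the published model), and fastText's default of 2M hash
// buckets for character n-grams.
#include <cstdint>
#include <cstdio>

int main() {
  const int64_t nwords  = 1000000;  // vocabulary rows (assumption)
  const int64_t buckets = 2000000;  // default -bucket value (assumption)
  const int64_t dim     = 300;
  const int64_t bytes   = (nwords + buckets) * dim * int64_t(sizeof(float));
  std::printf("input matrix: %.1f GB\n", bytes / 1e9);  // ~3.6 GB
  // The output matrix, vocab tables, and any training-time copies the
  // incremental path makes come on top of this, so a transient spike well
  // past the 6.8 GB on-disk size is plausible even on a 32 GB machine.
  return 0;
}
```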

ericxsun commented 6 years ago

That's weird. I have not met such an error, even though my dataset is larger than yours. I'll check it.

You could try to add more info to locate the line of code that caused the error. Any effort will be appreciated.
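For instance (a minimal sketch against upstream fastText's C++ API, not this fork's exact code), a small driver that wraps loadModel in a try/catch will at least show whether the failure is a plain std::bad_alloc during load:

```cpp
// loadcheck.cc: build against the fastText sources, then run it on the
// .bin model to see whether loading alone exhausts memory.
#include <exception>
#include <iostream>
#include "fasttext.h"

int main(int argc, char** argv) {
  if (argc < 2) {
    std::cerr << "usage: loadcheck <model.bin>\n";
    return 1;
  }
  fasttext::FastText ft;
  try {
    ft.loadModel(argv[1]);  // the step that reportedly fails
    std::cout << "model loaded ok\n";
  } catch (const std::bad_alloc&) {
    std::cerr << "std::bad_alloc: an allocation failed while loading\n";
    return 2;
  } catch (const std::exception& e) {
    std::cerr << "error: " << e.what() << '\n';
    return 3;
  }
  return 0;
}
```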

vdpappu commented 6 years ago

Sure. Will try that out and share the details.


vdpappu commented 6 years ago

I am able to run the same on a Mac; maybe it is some issue with my old machine. However, it is not taking the learning rate I set. It always shows 0.0000.

[screenshot: training progress output with lr shown as 0.0000]
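One possible explanation, assuming this fork keeps upstream fastText's linear learning-rate decay and that the incremental path carries the token counter over from the pre-trained model (both unverified assumptions): the printed lr is the base lr scaled by (1 - progress), so if progress saturates immediately, the printout reads 0.0000 even though the -lr flag was honored.

```cpp
// Simplified sketch of upstream fastText's lr decay as shown in its
// progress printout; not this fork's exact code.
#include <cstdint>
#include <cstdio>

double decayedLr(double baseLr, int64_t tokenCount, int64_t ntokens, int epoch) {
  double progress = double(tokenCount) / (double(epoch) * double(ntokens));
  if (progress > 1.0) progress = 1.0;  // upstream clamps before printing
  return baseLr * (1.0 - progress);
}

int main() {
  // Hypothetical numbers: a 10M-token new corpus over 5 epochs, but a token
  // counter carried over from a ~16B-token pre-trained corpus. Progress
  // saturates at once, so every report shows lr 0.0000.
  std::printf("%.4f\n", decayedLr(0.05, 16000000000LL, 10000000, 5));
  // Fresh run for comparison: halfway through, lr has decayed to half.
  std::printf("%.4f\n", decayedLr(0.05, 25000000, 10000000, 5));
  return 0;
}
```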