Open vdpappu opened 6 years ago
That's weird. I have not met such an error, even though my dataset is larger than yours. I'll check it.
You could try to add more logging to locate the line of code that causes the error. Any effort will be appreciated.
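One way to locate it (a sketch, assuming the binary is rebuilt with debug symbols, e.g. adding -g to CXXFLAGS in the Makefile; the paths below are placeholders) is to run the same command under gdb and grab a backtrace when it aborts:

```
# assumption: fasttext rebuilt with debug symbols (-g)
gdb --args ./fasttext skipgram -input corpus.txt \
    -inputModel pretrained.bin -output model_incr -incr
(gdb) run
# after the crash, print the call stack to see which line failed:
(gdb) bt
```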
Sure. Will try that out and share the details.
I am able to run the same on a Mac, so maybe it is some issue with my old machine. However, it is not taking the set learning rate; it always shows 0.0000.
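A note on the 0.0000, hedged since I have not re-checked this fork's code: upstream fastText prints the current, linearly decayed learning rate in its progress line rather than the configured value, so it naturally reaches 0.0000 near the end of training. To rule out an argument-parsing problem, the rate can be passed explicitly and verbosity raised (paths are placeholders):

```
# -lr and -verbose are standard fastText flags; watch whether the printed
# rate starts at 0.05 and decays, or is stuck at 0.0000 from the first line
./fasttext skipgram -input corpus.txt -inputModel pretrained.bin \
    -output model_incr -incr -lr 0.05 -verbose 2
```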
Hi Eric, thanks for the excellent enhancement. I am trying to use your repo for incremental learning, but I am getting a memory error while running the script. My machine has 32 GB of RAM, and I am otherwise able to load the pre-trained model for inference tasks.
Pre-trained model size: 6.8 GB
Command executed: ./fasttext skipgram -input /home/aaa/Downloads/datasets/nlu/sed_sof_corpus.txt -inputModel /home/aaa/Downloads/datasets/wiki-news-300d-1M-subword.bin -output sed_sof_trlearn -incr
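For what it's worth, a back-of-the-envelope footprint estimate (my own arithmetic, assuming wiki-news-300d-1M-subword keeps fastText's default 2M subword buckets, a 1M-word vocabulary, and 300-dimensional float vectors):

```
# input matrix:  (1M words + 2M buckets) x 300 dims x 4 bytes ≈ 3.6 GB
# output matrix:  1M words x 300 dims x 4 bytes               ≈ 1.2 GB
# plus the 6.8 GB .bin being read and any temporary copies the incremental
# loader makes, so the peak can sit well above the model's on-disk size.
# Check headroom and any per-process cap before training:
free -h
ulimit -v
```

If the peak briefly exceeds what the kernel will grant (for example under a low ulimit or memory pressure from other processes), an allocation failure like this one is plausible even with 32 GB of RAM.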