facebookresearch / moe

Misspelling Oblivious Word Embeddings

training code/pretrained word vectors #3


wwymak commented 4 years ago

Are there any plans to open source the training code and/or the word vectors trained with the moe data?

Thanks!

edizel commented 4 years ago

Hello @wwymak :) We are starting the process of open sourcing the source code. Stay tuned!

loretoparisi commented 4 years ago

@edizel amazing work! I'm the author of FastText.js, which wraps FastText in a handy Node.js API, and I can't wait to see moe models as well!

c0nn3r commented 4 years ago

Has there been any update on this?

saippuakauppias commented 4 years ago

I suspect the vectors will never be released, and that the data wouldn't be of much benefit anyway. It contains a lot of duplicates, plus bad examples. Nothing good can be learned from it.

c0nn3r commented 4 years ago

I'm just interested in the model and the training code. I found the examples in the paper impressive, even if I agree the dataset is far from clean.

murali1996 commented 4 years ago

Hi @edizel, this is interesting work! I am currently working on a similar word-recognition task and would like to know whether you will be releasing your models and scripts in the coming weeks. Specifically, I'm looking for the test data (with misspellings, or the script to create it) and the models needed to reproduce the scores reported in your paper (Table 2).
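Since the authors' misspelling-generation script was never released, here is a minimal sketch of one common way to synthesize misspellings for a test set: applying a single random character edit (delete, insert, substitute, or adjacent swap) to each word. This is an assumption for illustration only, not the procedure used in the MOE paper, which derived misspellings from real query logs.

```python
import random

# Hypothetical helper, not from the MOE repo: apply one random
# character-level edit to a word to produce a synthetic misspelling.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def misspell(word, rng=random):
    """Return `word` with one random edit: delete, insert,
    substitute, or swap of adjacent characters."""
    if len(word) < 2:
        # Too short to delete/swap safely; just append a letter.
        return word + rng.choice(ALPHABET)
    op = rng.choice(["delete", "insert", "substitute", "swap"])
    i = rng.randrange(len(word))
    if op == "delete":
        return word[:i] + word[i + 1:]
    if op == "insert":
        return word[:i] + rng.choice(ALPHABET) + word[i:]
    if op == "substitute":
        return word[:i] + rng.choice(ALPHABET) + word[i + 1:]
    # swap: exchange two adjacent characters
    i = min(i, len(word) - 2)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]
```

Each output is within edit distance 1 of the input, which roughly matches the kind of typos a misspelling-oblivious embedding is meant to absorb; real evaluation sets would also need to control for which misspellings actually occur in user queries.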

edizel commented 4 years ago

Hello everybody! Unfortunately, we won't be able to open source the model and the test code any time soon. Sorry about that :(

duongkstn commented 4 years ago

keep going @edizel :-)