Hi,
thank you for your interest in my work! I appreciate it.
I can't remember or find any reference to "word_embeds_restaurants_ote.txt".
Where did you see this? The word embeddings are trained using the Skip-Gram implementation from the Gensim library with negative sampling. I used the same embeddings as in this earlier work of mine.
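In case it is useful, the training step looks roughly like the sketch below with Gensim's Word2Vec. The toy corpus, the output path, and the hyperparameter values are placeholders, not the exact settings I used; only the choice of Skip-Gram with negative sampling matches the description above.

```python
from gensim.models import Word2Vec

# Toy corpus just to keep the snippet runnable; in practice this is an
# iterable over tokenized sentences from the training data.
sentences = [
    ["the", "pasta", "was", "great"],
    ["terrible", "service", "but", "great", "food"],
]

# sg=1 selects the Skip-Gram architecture, negative=5 enables negative sampling.
# vector_size, window and min_count are illustrative values, not my settings.
model = Word2Vec(
    sentences,
    vector_size=100,
    window=5,
    sg=1,
    negative=5,
    min_count=1,
)

# Export the vectors in plain-text word2vec format (hypothetical filename).
model.wv.save_word2vec_format("word_embeddings.txt")
```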
I extracted the vector representations for the word-level and character-level embeddings in "analyze_trained_model.py". After that I applied t-SNE from scikit-learn to the vectors to obtain two-dimensional vectors. The results are exported to several files. The actual visualization happens in "plot_suffixes.py", which I have now included in the repository.
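The dimensionality reduction itself is just scikit-learn's TSNE applied to the extracted vectors, followed by a scatter plot. Here is a minimal sketch of that pipeline; the random vectors, placeholder labels, plot settings, and output filename are illustrative only, and the real code lives in the two scripts mentioned above.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# vectors: an (n_items, dim) array of word- or character-level embeddings
# extracted from the trained model; random data here just to keep it runnable.
rng = np.random.default_rng(0)
vectors = rng.normal(size=(50, 100))
labels = [f"tok{i}" for i in range(50)]  # placeholder labels

# Reduce the embeddings to two dimensions; init/random_state are illustrative.
coords = TSNE(n_components=2, init="pca", random_state=0).fit_transform(vectors)

# Simple labelled scatter plot, similar in spirit to plot_suffixes.py.
fig, ax = plt.subplots()
ax.scatter(coords[:, 0], coords[:, 1], s=5)
for (x, y), label in zip(coords, labels):
    ax.annotate(label, (x, y), fontsize=6)
fig.savefig("tsne_plot.png", dpi=300)  # hypothetical output path
```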
I hope this helps a bit. Tell me if there is anything else you need to know or that is unclear.
Edit: You may want to check out the newest version.
Hi,
Your work is really great!
I have some questions.
Please share your replies; this will really help my NLP project, and I will cite your work.
Thanks.