I created a code chunk in which word embeddings are computed. The code is not yet wrapped in functions, but we can keep this PR open while I implement that. In a way, word embeddings serve the same purpose as the topic model, so I'll have to see which of the two gives the most useful and reliable final results.
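For reference, a minimal sketch of what "computing word embeddings" can look like, assuming a count-based approach (co-occurrence matrix plus truncated SVD) rather than whatever library the actual code chunk uses; the function name and parameters here are illustrative, not from this PR:

```python
import numpy as np

def embed(sentences, dim=2, window=2):
    """Illustrative only: derive {word: vector} from co-occurrence counts.

    sentences: list of token lists. dim: embedding size. window: context width.
    """
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    # Count how often each word pair appears within the context window.
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if i != j:
                    counts[idx[w], idx[s[j]]] += 1
    # SVD of the log-scaled count matrix yields dense vectors per word.
    u, sing, _ = np.linalg.svd(np.log1p(counts))
    return {w: u[idx[w], :dim] * sing[:dim] for w in vocab}

vectors = embed([["topic", "model", "words"], ["word", "embeddings", "words"]])
```

Like the topic model, this maps words into a shared low-dimensional space where related words end up close together, which is why the two approaches overlap in what they can contribute to the final results.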