cloudml / zen

Zen aims to provide the largest-scale and most efficient machine learning platform on top of Spark, including but not limited to logistic regression, latent Dirichlet allocation, factorization machines, and DNNs.

(LDA): sparse initialization rather than uniformly random initialization #37


hucheng commented 9 years ago

A common phenomenon in LDA training is that the first several iterations are very costly. This is largely because uniformly random initialization makes the word-topic, and thus the doc-topic, counts quite dense.

There are two approaches:

  1. Sparse initialization: constrain each word to a small random subset (e.g., 1%) of all topics, and for each token of that word, sample its initial topic from that constrained subset rather than from all topics (see the first sketch after this list).
  2. First train for several iterations on a small part of the corpus (e.g., 1%) to initialize the word-topic distribution, which should be much sparser than a uniformly random initialization (see the second sketch after this list).
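A minimal sketch of approach 1, assuming a word's topic subset can be derived deterministically from its word id so that all tokens of the same word (even on different executors) agree on the subset without extra communication. The names here (`SparseInit`, `initialTopic`, `subsetRatio`) are illustrative, not Zen's API:

```scala
import scala.util.Random
import scala.util.hashing.MurmurHash3

// Sketch of approach 1: restrict each word to a small random subset of topics.
object SparseInit {
  /** Draw the initial topic for one token of `wordId`, restricted to a
   *  small random subset of the numTopics topics. */
  def initialTopic(wordId: Int, numTopics: Int, subsetRatio: Double, rng: Random): Int = {
    val subsetSize = math.max(1, (numTopics * subsetRatio).toInt)
    // Seed the subset choice with the word id so every token of this word
    // sees the same topic subset, with no coordination needed.
    val subsetRng = new Random(MurmurHash3.stringHash(wordId.toString))
    // Duplicates within the subset are tolerated in this sketch; they only
    // skew the within-subset sampling slightly.
    val subset = Array.fill(subsetSize)(subsetRng.nextInt(numTopics))
    // Sample this token's topic from the subset instead of all topics, so
    // the word-topic row starts with at most subsetSize nonzero entries.
    subset(rng.nextInt(subsetSize))
  }
}
```

And a sketch of approach 2, where `LDA.train` and its `initialModel` parameter are hypothetical stand-ins for whatever warm-start hook the trainer would expose; only `RDD.sample` is standard Spark API:

```scala
// Pre-train on ~1% of the corpus for a few cheap iterations, then use the
// resulting (already sparse) word-topic counts to seed the full run.
val warmupCorpus = corpus.sample(withReplacement = false, fraction = 0.01, seed = 17L)
val warmModel = LDA.train(warmupCorpus, numTopics, iterations = 5)
val model = LDA.train(corpus, numTopics, iterations, initialModel = Some(warmModel))
```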
bhoppi commented 9 years ago

We can split the perplexity into two parts, i.e. word perplexity and doc perplexity. Then we can look into the impact of the different initialization strategies on the two parts.
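One plausible way to define the split (a sketch under assumptions, not necessarily the definition that will be adopted here): compute word perplexity with the document-specific topic mixture replaced by the corpus-average one, and take doc perplexity as the multiplicative residual, so that total = word × doc. `phi`, `theta`, and `pk` stand for smoothed word-topic, doc-topic, and corpus-level topic distributions:

```scala
// Hypothetical word/doc perplexity split; all names are illustrative.
object PerplexitySplit {
  def split(tokens: Seq[(Int, Int)],       // (docId, wordId), one entry per token
            phi: Array[Array[Double]],     // K x V, phi(k)(w) = p(w|k)
            theta: Array[Array[Double]],   // D x K, theta(d)(k) = p(k|d)
            pk: Array[Double]              // corpus-level topic distribution p(k)
           ): (Double, Double, Double) = {
    val numTopics = phi.length
    var llTotal = 0.0
    var llWord = 0.0
    for ((d, w) <- tokens) {
      var pwd = 0.0
      var pw = 0.0
      var k = 0
      while (k < numTopics) {
        pwd += phi(k)(w) * theta(d)(k)  // document-specific mixture p(w|d)
        pw  += phi(k)(w) * pk(k)        // corpus-average mixture p(w)
        k += 1
      }
      llTotal += math.log(pwd)
      llWord  += math.log(pw)
    }
    val n = tokens.size.toDouble
    val total = math.exp(-llTotal / n)  // standard perplexity
    val word  = math.exp(-llWord / n)   // word part: topic mix fixed to corpus average
    val doc   = total / word            // doc part: gain from doc-specific mixtures
    (total, word, doc)
  }
}
```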

hucheng commented 9 years ago

Great. Looking forward to the experimental results for both word perplexity and doc perplexity.