microsoft / LightLDA

Scalable, fast, and lightweight system for large-scale topic modeling
http://www.dmtk.io
MIT License
842 stars 234 forks

How to collect loglikelihood from logs lightlda print out? #21

Closed tanglizhe1105 closed 8 years ago

tanglizhe1105 commented 8 years ago

Hi, what is the relation between a block and a slice? When computing the doc log-likelihood, we only consider slice == 0 in each worker. Does this mean we only compute the likelihood for the documents in slice 0 and ignore the other slices? Do we then only compute part of the documents in this data block?
When computing the word log-likelihood, we require block == 0. Does this mean we compute the likelihood for all the words in this block but ignore the other blocks? And when computing the normalized log-likelihood, we use the condition TrainerId() == 0 && block == 0, which also ignores the other blocks.

In workers, all slices in every block may be executed loglikelihood under upper condition setting, and print computing loglikelihood.

So, how should I collect the corpus's doc log-likelihood, word log-likelihood, and total log-likelihood?

feiga commented 8 years ago

The conditions are just there to ensure we compute the likelihood exactly ONCE per iteration.

  1. Nothing is ignored. We compute the doc likelihood only when sampling slice 0 in an iteration, but the computation covers the entire document set. Computing the doc likelihood only needs the doc-topic information; it is unrelated to the word-topic table, so we could compute it while sampling any slice.
  2. We compute the word likelihood when sampling block 0. This covers only the words contained in block 0, so the result is sometimes an approximation of the true word likelihood: the vocabulary of unique words in block 0 is not always the same as the whole vocabulary. It should not differ by much, though; block 0 may only lack some very low-frequency words. Computing the word likelihood depends only on the word-topic table, so we can compute it whenever we have that parameter.
  3. This part depends only on the summary row, n_t. The condition here just makes sure we compute it once per iteration.
  4. Sorry, I'm not clear what you mean by "In workers, all slices in every block may be executed loglikelihood under upper condition setting, and print computing log likelihood."
  5. The whole likelihood is doc + word + normalized. The doc likelihood is the sum over all documents. Note that on every machine we only sample part of the dataset (say, 1000 documents) to compute it. You can compute it over the whole dataset, but that is time-consuming; if you want the whole doc likelihood, multiplying the sampled result by a coefficient gives an approximation. The word likelihood is the sum over all words, which may be computed in different slices; just sum the results from one process.
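To make the three-way split above concrete, here is a small Python sketch of the standard collapsed-LDA log-likelihood decomposition that the answer describes (doc part from the doc-topic table, word part from the word-topic table, normalized part from the summary row n_t). The toy counts, function names, and hyperparameters are all made up for illustration; this is not LightLDA's actual C++ code.

```python
import math

# Hypothetical toy counts for illustration; LightLDA's real tables are
# distributed across workers. alpha/beta are symmetric Dirichlet priors.
K, V = 2, 3                      # topics, vocabulary size
alpha, beta = 0.5, 0.1

# doc_topic[d][k]: topic counts per document (doc-topic table)
doc_topic = [[3, 1], [0, 2]]
# word_topic[w][k]: topic counts per word (word-topic table)
word_topic = [[2, 1], [1, 0], [0, 2]]
# summary[k]: the summary row n_t, i.e. total tokens assigned to topic k
summary = [3, 3]

def doc_likelihood(doc_topic, alpha, K):
    # Depends only on the doc-topic table, so it can be computed while
    # sampling any slice (LightLDA computes it at slice 0, once per iteration).
    ll = 0.0
    for counts in doc_topic:
        n_d = sum(counts)
        ll += sum(math.lgamma(c + alpha) for c in counts)
        ll -= K * math.lgamma(alpha)
        ll += math.lgamma(K * alpha) - math.lgamma(n_d + K * alpha)
    return ll

def word_likelihood(word_topic, beta):
    # Depends only on the word-topic table; computed once per iteration
    # (block 0), covering whichever words that block contains.
    ll = 0.0
    for counts in word_topic:
        ll += sum(math.lgamma(c + beta) for c in counts)
        ll -= len(counts) * math.lgamma(beta)
    return ll

def normalized_likelihood(summary, beta, V):
    # Depends only on the summary row n_t; computed once per iteration.
    ll = 0.0
    for n_t in summary:
        ll += math.lgamma(V * beta) - math.lgamma(n_t + V * beta)
    return ll

total = (doc_likelihood(doc_topic, alpha, K)
         + word_likelihood(word_topic, beta)
         + normalized_likelihood(summary, beta, V))
```

The total is simply the sum of the three components, which is why it suffices to collect each component once per iteration from the logs and add them up.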
tanglizhe1105 commented 8 years ago

Thank you very much, Feiga. Sorry, my English is not good. I mean that the slice is the basic unit of the corpus in training, and each slice prints log-likelihood logs when it is trained. When sampling slice 0 in one iteration, will it compute the doc likelihood of the entire set of documents in the block? Here we assume there is one block per worker.

Thanks, Lizhe

feiga commented 8 years ago

@tanglizhe1105 Sorry, I must have missed your message.

Yes, it computes the doc likelihood over the entire set of documents in the block. See here
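The once-per-iteration guard being confirmed here can be sketched as follows; the function name and arguments are hypothetical, not LightLDA's actual API:

```python
# Hypothetical sketch: the doc likelihood loops over every document in
# the data block, but only when the current slice is 0, so it is
# computed and printed exactly once per iteration.
def maybe_log_doc_likelihood(block_docs, slice_id, compute_doc_ll):
    if slice_id != 0:           # other slices skip the computation
        return None
    # slice 0: walk the ENTIRE block, not just the slice-0 portion
    return sum(compute_doc_ll(doc) for doc in block_docs)
```

This matches the earlier point that the doc likelihood depends only on the doc-topic table, so any single slice could host the computation; slice 0 is just the chosen trigger.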