cloudml / zen

Zen aims to provide the largest-scale and most efficient machine learning platform on top of Spark, including but not limited to logistic regression, latent Dirichlet allocation, factorization machines, and DNN.
Apache License 2.0

(LDA): Search complexity in SparseVector (word-topic and doc-topic vectors) is O(log K). Please consider using HashVector in Breeze (or OpenAddressHashArray in Breeze), which is O(1) #28

Open hucheng opened 9 years ago

hucheng commented 9 years ago

During sampleToken, when computing the probability Nkd*(Nkw+beta), given a topic k in Nkd, finding the corresponding Nkw for that topic k costs O(log Kw). If we use HashVector instead, the complexity drops to O(1), at a small space overhead. Since HashVector in Breeze is not serializable, please consider using OpenAddressHashArray in Breeze directly.
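The lookup-cost difference can be sketched without Breeze. The two classes below are hypothetical stand-ins, not Breeze's actual implementations: `SparseCounts` mimics a SparseVector (sorted indices, binary search, O(log nnz) per lookup), while `HashCounts` mimics an open-addressing hash array with O(1) expected lookups.

```scala
object LookupSketch {
  // Sparse representation: parallel arrays of sorted topic ids and counts.
  final class SparseCounts(val index: Array[Int], val data: Array[Int]) {
    def apply(k: Int): Int = {
      val i = java.util.Arrays.binarySearch(index, k) // O(log nnz) binary search
      if (i >= 0) data(i) else 0
    }
  }

  // Open addressing with linear probing; -1 marks an empty slot.
  // Capacity must stay larger than the number of stored keys.
  final class HashCounts(capacity: Int) {
    private val keys = Array.fill(capacity)(-1)
    private val vals = new Array[Int](capacity)
    private def slot(k: Int): Int = {
      var i = k % capacity
      while (keys(i) != -1 && keys(i) != k) i = (i + 1) % capacity
      i
    }
    def update(k: Int, v: Int): Unit = { val i = slot(k); keys(i) = k; vals(i) = v }
    // O(1) expected: one hash plus a short probe sequence.
    def apply(k: Int): Int = { val i = slot(k); if (keys(i) == k) vals(i) else 0 }
  }
}
```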

hucheng commented 9 years ago

Note that aggregating (summing) two HashVectors costs more than summing two SparseVectors. So the key is to avoid allocating new vectors and use in-place addition instead. See aggregateByKey.
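Spark's aggregateByKey allows seqOp and combOp to mutate and return their first argument, which is what makes in-place accumulation possible. A minimal Spark-free sketch of such operators (the names `zero`/`seqOp`/`combOp` and the plain-array accumulator are illustrative, not zen's actual code):

```scala
object InPlaceAgg {
  // zeroValue: one fresh accumulator per key (a dense count array over K topics).
  def zero(numTopics: Int): Array[Int] = new Array[Int](numTopics)

  // seqOp: fold one observed topic into the accumulator IN PLACE and return it,
  // instead of allocating a new vector for every record.
  def seqOp(acc: Array[Int], topic: Int): Array[Int] = {
    acc(topic) += 1
    acc
  }

  // combOp: merge two partial accumulators, again mutating the left one.
  def combOp(a: Array[Int], b: Array[Int]): Array[Int] = {
    var i = 0
    while (i < a.length) { a(i) += b(i); i += 1 }
    a
  }
}
```

With an `RDD[(TermId, Topic)]` this would be wired up as `rdd.aggregateByKey(InPlaceAgg.zero(K))(InPlaceAgg.seqOp, InPlaceAgg.combOp)`, so each partition reuses one accumulator per key rather than allocating a vector per element.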

bhoppi commented 8 years ago

I tried several open-addressing hash map implementations as the HashVector backend, but surprisingly they are all slower than SparseVector (Java's HashMap >> Spark's OpenHashMap > Breeze's OpenAddressHashArray > fastutil's int-int map), perhaps because collisions occur about log(K) times on average. But now this issue is perfectly solved by a very simple trick: before we sample all the edges of one term, we first transform the term's count vector into a DenseVector. Finding n_kw then takes only O(1).
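The trick works because the O(K) cost of densifying is paid once per term and then amortized over all of that term's edges, each of which gets plain O(1) array indexing. A hypothetical sketch (the `TermCounts`/`toDense` names are illustrative, not zen's actual API):

```scala
object DenseTrick {
  // Sparse per-term topic counts, SparseVector-style:
  // sorted topic ids and their counts (nnz entries).
  final case class TermCounts(index: Array[Int], data: Array[Int])

  // Densify once per term: a single O(nnz) pass. Afterwards every n_kw
  // lookup while sampling this term's edges is a direct array read, O(1),
  // with no binary search and no hash probing.
  def toDense(tc: TermCounts, numTopics: Int): Array[Int] = {
    val dense = new Array[Int](numTopics)
    var i = 0
    while (i < tc.index.length) {
      dense(tc.index(i)) = tc.data(i)
      i += 1
    }
    dense
  }
}
```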