According to the LDA algorithm, the dimension of the topic matrix (word probabilities) should be K x V, where K is the number of topics and V is the vocabulary size (http://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf). But here the dimension is defined as K x W, where W is the word-vector dimension, set as a constant elsewhere in the code:
#Number of dimensions in a single word vector
n_units = int(os.getenv('n_units', 300))
Could you kindly explain how these dimensions can still follow the LDA algorithm? A small sketch of the shape difference I mean is below.
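
For context, this is a minimal sketch of the two shapes, using hypothetical sizes K and V and the n_units = 300 word-vector dimension from the config; the final softmax step is only my guess at how word probabilities could be recovered from K x W topic vectors, not something I see stated in the code:

import numpy as np

# Hypothetical sizes, for illustration only
K = 20         # number of topics
V = 5000       # vocabulary size
n_units = 300  # word-vector dimension (W), as set via os.getenv('n_units', 300)

# Classic LDA (Blei et al., 2003): the topic-word matrix is K x V,
# each row a probability distribution over the vocabulary.
beta = np.random.dirichlet(np.ones(V), size=K)   # shape (K, V)
assert beta.shape == (K, V)

# What I see here instead: topics live in word-vector space, K x W.
topic_vectors = np.random.randn(K, n_units)      # shape (K, 300)
word_vectors = np.random.randn(V, n_units)       # shape (V, 300)

# My assumption of how word probabilities would then be recovered:
# a softmax over topic-word dot products gives a K x V matrix again.
logits = topic_vectors @ word_vectors.T          # shape (K, V)
logits -= logits.max(axis=1, keepdims=True)      # numerical stability
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
assert probs.shape == (K, V)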
Thanks in advance!!