nateraw / Lda2vec-Tensorflow

Tensorflow 1.5 implementation of Chris Moody's Lda2vec, adapted from @meereeum
MIT License

ValueError: too many values to unpack (expected 19) #39

Closed · dbl001 closed this 5 years ago

dbl001 commented 5 years ago

I get this error when using the larger glove file:

```python
embedding_matrix = P.load_glove("glove.840B.300d.txt")

m = model(num_docs, vocab_size,
          num_topics=num_topics,
          embedding_size=embed_size,
          restore=True,
          logdir="/data/logdir_190318_1739_190320_0734",
          pretrained_embeddings=embedding_matrix,
          freqs=freqs)
```


```
ValueError                                Traceback (most recent call last)
in ()
     25                           logdir="/data/logdir_190318_1739_190320_0734",
     26                           pretrained_embeddings=embedding_matrix,
---> 27                           freqs=freqs)
     28
     29 m.train(pivot_ids, target_ids, doc_ids, len(pivot_ids), num_epochs, idx_to_word=idx_to_word, switch_loss_epoch=5)

~/Lda2vec-Tensorflow/lda2vec/Lda2vec.py in __init__(self, num_unique_documents, vocab_size, num_topics, freqs, save_graph_def, embedding_size, num_sampled, learning_rate, lmbda, alpha, power, batch_size, logdir, restore, fixed_words, factors_in, pretrained_embeddings)
    109              self.fraction, self.loss_lda, self.loss, self.loss_avgs_op,
    110              self.optimizer, self.merged, embedding, nce_weights, nce_biases,
--> 111              doc_embedding, topic_embedding) = handles
    112
    113         self.w_embed = W.Word_Embedding(self.embedding_size, self.vocab_size, self.num_sampled,

ValueError: too many values to unpack (expected 19)
```

![Screen Shot 2019-03-25 at 7 58 57 AM](https://user-images.githubusercontent.com/3105499/54930132-f3641800-4ed3-11e9-9383-d47682ebf4b8.png)
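For anyone hitting the same message: Python raises exactly this `ValueError` when the tuple on the right-hand side of an unpacking assignment contains more elements than the names on the left, so here the `restore=True` path is handing back more objects in `handles` than the 19 names unpacked at `Lda2vec.py` lines 109–111 expect. A minimal, self-contained sketch of the failure mode (the `handles` contents below are hypothetical stand-ins, not the library's actual return values):

```python
# Hypothetical stand-in for what the restore path returns; the real
# `handles` tuple is rebuilt from the saved graph, not from this range.
handles = tuple(range(21))  # 21 values, but the caller unpacks 19

try:
    (h_a, h_b, h_c, h_d, h_e, h_f, h_g, h_h, h_i, h_j,
     h_k, h_l, h_m, h_n, h_o, h_p, h_q, h_r, h_s) = handles  # 19 names
except ValueError as err:
    print(err)  # -> too many values to unpack (expected 19)
```

Presumably the fix is just keeping the number of objects the restore branch packs into `handles` in sync with that unpacking assignment.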
nateraw commented 5 years ago

This one makes sense! Ugh, the restore logic is a headache in its current setup. Should be a quick fix tonight. Thank you for bringing this to my attention, @dbl001.

dbl001 commented 5 years ago

This appears to have been fixed.

nateraw commented 5 years ago

Cool, closing then.