Open kechan opened 6 years ago
I think the embedded_text in Listing 7.1 is wrong; the code shown is actually for embedded_question.
Additionally, I noticed that the following code in Listing 7.2 makes no sense:
# Equivalent to np.zeros((num_samples, answer_vocabulary_size)),
# since the upper bound of randint is exclusive
answers = np.random.randint(0, 1,
                            size=(num_samples, answer_vocabulary_size))
I know this is just toy data, but since it could cause confusion, something like the following would be more appropriate:
answers = np.random.randint(answer_vocabulary_size, size=(num_samples,))
answers = keras.utils.to_categorical(answers, answer_vocabulary_size)
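To make the bug concrete: `np.random.randint`'s upper bound is exclusive, so `randint(0, 1, ...)` can only ever return zeros. Here is a minimal numpy-only sketch (the sizes are illustrative, and `np.eye(...)[labels]` stands in for `keras.utils.to_categorical` so the snippet runs without Keras):

```python
import numpy as np

num_samples = 5             # illustrative sizes
answer_vocabulary_size = 4

# Original Listing 7.2 code: high=1 is exclusive, so every entry is 0 --
# identical to np.zeros((num_samples, answer_vocabulary_size)).
answers_bad = np.random.randint(0, 1,
                                size=(num_samples, answer_vocabulary_size))
assert (answers_bad == 0).all()

# Proposed fix: draw integer labels, then one-hot encode them.
# np.eye(n)[labels] mimics keras.utils.to_categorical for integer labels.
labels = np.random.randint(answer_vocabulary_size, size=(num_samples,))
answers = np.eye(answer_vocabulary_size)[labels]
assert answers.shape == (num_samples, answer_vocabulary_size)
assert (answers.sum(axis=1) == 1).all()   # exactly one "hot" entry per row
```

With the fix, each row is a valid one-hot target, whereas the original produces an all-zero matrix that a softmax output can never match.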
Yes, there is an error in the order of the arguments/parameters...
Let me share a modest contribution to the community around François Chollet's book. On my GitHub repo you will find 4 companion notebooks for Chapter 7, «Advanced deep-learning best practices».
I wanted to cross-check the code here against the book, but couldn't find chapter 7.
Why is it
embedded_text = layers.Embedding(64, text_vocabulary_size)(text_input)
Shouldn't the vocabulary size be the 1st argument and 64 (the embedding dimension) the 2nd?
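For reference: yes, in Keras `Embedding(input_dim, output_dim)` takes the vocabulary size first and the embedding dimension second, because the layer is essentially a lookup table with one row per token. A numpy sketch of what the lookup does (sizes are illustrative, and the indexing stands in for the layer's forward pass):

```python
import numpy as np

text_vocabulary_size = 10000   # illustrative vocab size
embedding_dim = 64

# An Embedding layer holds a trainable matrix of shape
# (input_dim, output_dim) -- one row per vocabulary token --
# so the vocabulary size must come first:
# layers.Embedding(text_vocabulary_size, 64), not Embedding(64, ...).
weights = np.random.randn(text_vocabulary_size, embedding_dim)

token_ids = np.array([[3, 42, 7]])   # a batch of one 3-token sequence
embedded = weights[token_ids]        # the row lookup an Embedding performs
assert embedded.shape == (1, 3, embedding_dim)
```

Swapping the arguments, as in the listing, would build a table with only 64 rows, and any token id >= 64 would be out of range.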