Closed atw1020 closed 3 years ago
Thank you very much for reporting this issue. We will investigate and report back.
After updating to the 0.1alpha-2 release I get a different error:

NotImplementedError: Cannot convert a symbolic Tensor (gru/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

I can still replicate the issue on the 0.1alpha-1 release, though.
I'm getting the same error when trying to implement an LSTM layer (using the 0.1alpha-2 release).
NotImplementedError: Cannot convert a symbolic Tensor (bidirectional/forward_lstm/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
Code to reproduce (from deeplearning.ai):

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, Bidirectional, LSTM
import numpy as np

tokenizer = Tokenizer()
data = "In the town of Athy one Jeremy Lanigan \n Battered away til he hadnt a pound. \nHis father died and made him a man again \n Left him a farm and ten acres of ground. \nHe gave a grand party for friends and relations \nWho didnt forget him when come to the wall, \nAnd if youll but listen Ill make your eyes glisten \nOf the rows and the ructions of Lanigans Ball."
corpus = data.lower().split("\n")
tokenizer.fit_on_texts(corpus)
total_words = len(tokenizer.word_index) + 1

print(tokenizer.word_index)
print(total_words)

input_sequences = []
for line in corpus:
    token_list = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(token_list)):
        n_gram_sequence = token_list[:i+1]
        input_sequences.append(n_gram_sequence)

max_sequence_len = max([len(x) for x in input_sequences])
input_sequences = np.array(pad_sequences(input_sequences, maxlen=max_sequence_len, padding="pre"))

xs, labels = input_sequences[:, :-1], input_sequences[:, -1]
ys = tf.keras.utils.to_categorical(labels, num_classes=total_words)

model = Sequential()
model.add(Embedding(total_words, 64, input_length=max_sequence_len-1))
model.add(Bidirectional(LSTM(20)))
model.add(Dense(total_words, activation="softmax"))

model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
history = model.fit(xs, ys, epochs=500, verbose=1)
```
I also got the same error with just a simple GRU function; if I use only Dense layers, it is fine.

"NotImplementedError: Cannot convert a symbolic Tensor (gru/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported"

This is the simple code:

```python
feature_input = keras.Input(shape=(None, 257), name='feature_input')
gru1 = keras.layers.GRU(257, return_sequences=True)(feature_input)
out = keras.layers.Dense(257, activation='sigmoid')(gru1)
model = keras.Model(feature_input, out)
```
My issue has been fixed in the 0.1alpha-3 release.
gru/PartitionedCall:0 succeeded
gru_1/strided_slice_3:0 succeeded
gru_2/PartitionedCall:0 succeeded
gru_3/strided_slice_3:0 succeeded
gru_4/strided_slice_3:0 succeeded
Description
Initializing GRU units with non-default values of "activation", "recurrent_activation", or "recurrent_dropout" causes a
tf.errors.InvalidArgumentError
during training.

Error
Workaround
If you experience this issue yourself, the only workaround I have found is to use only the default activation, recurrent activation, and recurrent dropout. Losing non-default activation functions is a minor concern, but the inability to use dropout on recurrent GRUs is a major limitation.
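As a sketch of the workaround: leave every recurrent setting at its default rather than passing explicit values (the layer sizes and loss below are arbitrary choices of mine, not from the original report):

```python
import tensorflow as tf
from tensorflow import keras

# Workaround sketch: rely on the GRU defaults
# (activation='tanh', recurrent_activation='sigmoid', recurrent_dropout=0.0)
# instead of passing non-default values, which triggers the error.
inputs = keras.Input(shape=(None, 257), name="feature_input")
x = keras.layers.GRU(64, return_sequences=True)(inputs)  # defaults only
outputs = keras.layers.Dense(257, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.compile(loss="mse", optimizer="adam")
```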
Code to Reproduce
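The reporter's exact snippet is not preserved here; as a minimal sketch, a model like the following (layer sizes and the specific non-default settings are my own assumptions) should hit the error on the affected releases:

```python
import tensorflow as tf
from tensorflow import keras

# Hypothetical minimal model. The non-default recurrent settings below
# (relu activation, recurrent_dropout > 0) are the kind of configuration
# the description says triggers tf.errors.InvalidArgumentError in training.
model = keras.Sequential([
    keras.layers.Input(shape=(10, 8)),
    keras.layers.GRU(16,
                     activation="relu",
                     recurrent_activation="hard_sigmoid",
                     recurrent_dropout=0.2),
    keras.layers.Dense(1),
])
model.compile(loss="mse", optimizer="adam")
# On the affected releases, calling model.fit(...) raises the error.
```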
Expected Erroneous Output