I have searched for this problem on StackOverflow, for example:
How to handle padding when using sequence_length parameter in TensorFlow dynamic_rnn?
One answer there feeds `sequence_length` with an array of the true per-example lengths. Here is the code they recommended, with the elided parts (imports, `X`, and the cell) filled in under assumed shapes so that it runs:
```python
import numpy as np
import tensorflow as tf

n_steps, n_inputs, n_neurons = 2, 3, 5   # assumed sizes, for illustration

# Padded batch: instance 1 is shorter, so its second step is zero padding.
X_batch = np.array([
    [[0., 1., 2.], [9., 8., 7.]],   # length 2
    [[3., 4., 5.], [0., 0., 0.]],   # length 1 (padded)
    [[6., 7., 8.], [6., 5., 4.]],   # length 2
])
seq_length_batch = np.array([2, 1, 2])

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
seq_length = tf.placeholder(tf.int32, [None])
basic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X,
                                    sequence_length=seq_length, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    outputs_val, states_val = sess.run([outputs, states], feed_dict={
        X: X_batch,
        seq_length: seq_length_batch
    })
```
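For reference, when `sequence_length` is supplied, `dynamic_rnn` stops each instance at its true length: the entries of `outputs` past that point are zeros, and `states` holds the state from the last real time step. A quick check against the values above (under the shapes assumed in the snippet):

```python
print(outputs_val.shape)       # (3, 2, 5): batch, steps, neurons
print(outputs_val[1, 1])       # all zeros -- the padded second step of instance 1
# For a BasicRNNCell the state equals the last valid output:
print(np.allclose(states_val[1], outputs_val[1, 0]))  # True
```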
I tried it, but the ValueError still occurred.
The root cause of this ValueError is the incorrect assignment of `feature_train` and `sequence_length`: the former needs to be padded to a uniform number of time steps, and the latter should be a 1-D vector of per-example lengths. The complete solution is in issue #3; a sketch of the padding step is shown below.
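For illustration, here is a minimal padding sketch, assuming `feature_train` is a Python list of variable-length arrays of shape `[steps, n_inputs]`; the helper name `pad_batch` is hypothetical:

```python
import numpy as np

def pad_batch(feature_train, n_inputs):
    # Hypothetical helper: feature_train is assumed to be a list of
    # variable-length float arrays of shape [steps, n_inputs].
    seq_length_batch = np.array([len(seq) for seq in feature_train], dtype=np.int32)
    max_steps = seq_length_batch.max()
    padded = np.zeros((len(feature_train), max_steps, n_inputs), dtype=np.float32)
    for i, seq in enumerate(feature_train):
        padded[i, :len(seq), :] = seq   # copy the real steps, leave the rest zero
    return padded, seq_length_batch

# `padded` has shape [batch, max_steps, n_inputs] and can be fed to X,
# while `seq_length_batch` is the 1-D length vector fed to seq_length.
```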
It seems that this error occurs widely in TensorFlow programs. Unfortunately, it also happened in our code (tensorflow_2.py). The full error trace is as follows, and the corresponding code is shown after it: