Currently, it's not very clear how to use the sequence length functionality in `TensorFlowRNNClassifier` (link). It takes a tensor as an argument, since the argument is eventually passed along to `tensorflow.nn.rnn`, which expects an integer tensor. However, it's not clear how one would define that sequence length tensor, since the input placeholder from which one would compute it is defined in the `fit` method via `_setup_training`.
It seems like `sequence_length` should take a function whose input is the input placeholder and whose output is a tensor, rather than taking a tensor directly, similar to how the `input_op_fn` argument works.
If there is currently some way to use the `sequence_length` argument to specify the sequence lengths for items in a minibatch, it would be great to have an example using it. At the moment, the examples don't use sequence lengths (example), which I think means that the RNN just reads in 0s at the ends of inputs that are padded up to the maximum sequence length.
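For concreteness, the kind of function proposed above would compute one integer length per example from the padded input batch. Here is a minimal NumPy sketch of that computation (the function name is hypothetical, and it assumes padding timesteps are all-zero vectors appearing only at the end of each sequence):

```python
import numpy as np

def sequence_lengths(batch):
    """Return the number of non-padding timesteps per example.

    Assumes `batch` has shape (batch_size, max_steps, features) and
    that padding consists of all-zero timesteps at the end of each
    sequence. A genuine all-zero timestep mid-sequence would be
    miscounted, so this is a sketch, not a general solution.
    """
    # A timestep counts as "real" if any feature in it is nonzero.
    mask = np.any(batch != 0, axis=2)          # (batch_size, max_steps)
    return mask.sum(axis=1).astype(np.int32)   # (batch_size,)

# Two examples padded to 4 timesteps, with true lengths 2 and 3.
batch = np.array([
    [[1.0], [2.0], [0.0], [0.0]],
    [[3.0], [1.0], [4.0], [0.0]],
])
print(sequence_lengths(batch))  # -> [2 3]
```

The equivalent graph-mode version (something like `tf.reduce_sum` over a nonzero mask of the placeholder) is what a function-valued `sequence_length` argument could build, mirroring how `input_op_fn` receives the placeholder and returns ops.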