PyTorch defaults to `(seq_len, batch, input_size)` in its recurrent modules. It's possible to change this to batch-first, but I measured a significant slowdown.
Why not allow the user to provide the axis for concatenation when using the buffered streamer, for convenience?
Something like `pescador.buffer_stream(streamer, minibatch_size, axis=1)`, which could use `np.swapaxes` or some other tricks inside pescador to stack the samples.
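A minimal sketch of what the batching logic might look like, assuming samples are dicts of arrays (the `buffer_batch` helper name and the `axis` parameter are hypothetical, not existing pescador API):

```python
import numpy as np

def buffer_batch(samples, axis=0):
    # Stack a list of per-sample dicts into one minibatch dict,
    # inserting the new batch dimension at position `axis`.
    return {
        key: np.stack([s[key] for s in samples], axis=axis)
        for key in samples[0]
    }

# Ten samples, each shaped (seq_len=5, input_size=3)
samples = [{"X": np.zeros((5, 3))} for _ in range(10)]

# axis=1 yields the time-major layout PyTorch RNNs expect:
# (seq_len, batch, input_size)
batch = buffer_batch(samples, axis=1)
print(batch["X"].shape)  # (5, 10, 3)
```

With `axis=0` (the current behavior) the same call would produce the batch-first shape `(10, 5, 3)`, so one keyword covers both layouts without a post-hoc `np.swapaxes`.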