Closed: mratsim closed this issue 6 years ago
Yes, I second this feature!
I have GRU Cells forward and backprop mostly working and tested.
I tried to implement the optimizations mentioned by the Silicon Valley AI Lab/Baidu Research here and asked for clarification, because their GRU variant 4's claim of "more speed, same memory usage" seems to actually be "more speed, more memory usage".
https://github.com/svail/diff_graphs/issues/2
I will probably implement forward, backward and inference primitives for all RNNs (all layers?), as there are huge gains to be had if we can re-use/destroy the input tensors, or at least the intermediate gates, during inference when there is no need for backprop.
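To illustrate the inference-only gate reuse, here is a minimal NumPy sketch of a single GRU cell step (using the CuDNN-style formulation, where the reset gate is applied after the recurrent matmul). The function and parameter names are illustrative, not the library's API: when `inference=True`, the reset-gate buffer is overwritten in place with the gated recurrent term, so no extra intermediate tensor is kept around for backprop.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell_forward(x, h, Wz, Wr, Wh, Uz, Ur, Uh, bz, br, bh, inference=False):
    """One GRU cell step (CuDNN-style reset-gate placement, illustrative names).

    When inference=True, the reset-gate buffer `r` is reused to hold
    r * (h @ Uh), since `r` itself is no longer needed once backprop
    is ruled out.
    """
    z = sigmoid(x @ Wz + h @ Uz + bz)   # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)   # reset gate
    if inference:
        # Overwrite r in place with the gated recurrent term.
        np.multiply(r, h @ Uh, out=r)
        hh = np.tanh(x @ Wh + r + bh)   # candidate hidden state
    else:
        hh = np.tanh(x @ Wh + r * (h @ Uh) + bh)
    return (1.0 - z) * h + z * hh
```

Both paths compute the same result; the inference path simply avoids keeping the reset gate alive as a separate buffer.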
Tracking more implementations.
There is an ongoing rewrite of MxNet CPU RNNs using Fused kernels:
They can serve as a reference benchmark.
I also noticed that there is experimental RNN Cell support in MKL DNN introduced here https://github.com/intel/mkl-dnn/commit/f35779d62a0b3a2e0f6be79a647b1e3acf02129b. Not too sure how it relates to https://github.com/intel/mkl-dnn/issues/218
The GRU Cell, forward, backward and inference are fully implemented with tests in #231.
Now I'm implementing GRU (in a fused manner); however, some questions are unresolved:
Which default layout to use: [time/sequence, batch, features] or [batch, time/sequence, features]?
batch_first = true
time_major = true
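The two flags above correspond to the two memory layouts; a small NumPy sketch of what the choice means in practice (dimensions are illustrative):

```python
import numpy as np

# Toy dimensions: 7 timesteps, 5 sequences in the batch, 16 features.
seq_len, batch, feats = 7, 5, 16

time_major  = np.zeros((seq_len, batch, feats))   # [time/sequence, batch, features]
batch_major = np.zeros((batch, seq_len, feats))   # [batch, time/sequence, features]

# Converting between the two is a swap of the first two axes; a contiguous
# copy is usually needed before handing the buffer to a fused kernel.
converted = np.ascontiguousarray(batch_major.transpose(1, 0, 2))
assert converted.shape == time_major.shape
```

In time-major layout all batch elements of a single timestep are contiguous, which is why fused RNN kernels tend to prefer it; batch-major is what users typically find more intuitive.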
xDesc Input. An array of fully packed tensor descriptors describing the input to each recurrent iteration (one descriptor per iteration). The first dimension (batch size) of the tensors may decrease from element n to element n+1 but may not increase. Each tensor descriptor must have the same second dimension (vector length).
but the forum questions here show that the current situation is confusing:
How to deal with variable-length sequences (for example sentences for machine translation). PyTorch's pack_padded_sequence and pad_packed_sequence are generating a lot of questions and are probably not the best way to go with this.
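A simpler alternative to packing is padding plus an explicit validity mask. This is a hedged sketch (the helper name `pad_and_mask` is made up for illustration), not a proposal for the final API:

```python
import numpy as np

def pad_and_mask(seqs, feats):
    """Pad variable-length sequences into a [batch, max_time, features]
    tensor and return a boolean mask of the valid timesteps.

    A straightforward alternative to PyTorch's pack_padded_sequence:
    the RNN runs over the full padded length and the mask is used to
    ignore padded steps in the loss / final-state selection.
    """
    lengths = np.array([len(s) for s in seqs])
    max_len = int(lengths.max())
    padded = np.zeros((len(seqs), max_len, feats))
    mask = np.zeros((len(seqs), max_len), dtype=bool)
    for i, s in enumerate(seqs):
        padded[i, :len(s)] = s
        mask[i, :len(s)] = True
    return padded, mask, lengths
```

The trade-off versus packing: padding wastes compute on the padded steps but keeps the kernel a dense, regular GEMM at every timestep, which is often a net win on well-aligned hardware.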
@mratsim, @TaoLv can answer parts of your questions based on our CPU implementation principles.
@mratsim I don't quite understand the weight reuse between CPU and GPU. Do you mean weights trained on GPU cannot be applied on CPU just because the CPU and GPU implementations have different equations? If so, how does TensorFlow handle this situation? AFAIK, TensorFlow has different equations from cuDNN, but it has also integrated cuDNN.

For the input data layout, I guess time-major will show better performance on both CPU and GPU. Actually, mxnet will perform a reshape for batch-major input: https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/rnn/rnn_cell.py#L677 Although I think this kind of reshape or layout change can be hidden in the cpp code or a DNN library for better performance.

For variable-length input, I have no idea how a framework can perform high-efficiency parallel computation on packed inputs if they are not well aligned.
As nv and baidu's blogs said, cudnn's equations are more friendly for optimization. But I'm wondering if there are any accuracy differences between these two sets of equations.
@TaoLv @pengzhao-intel
Thanks for dropping by. Regarding weights reuse, Keras plainly prevents sharing CuDNN and CPU weights, and they reimplemented a CPU version compatible with CuDNN.
- Keras: weights on GPU cannot be reused on CPU; their solution is to redo a CPU layer:
Now in the grand scheme of things, I suppose they can actually be re-used and the first couple batches will act like transfer learning/domain adaptation for CNNs.
Regarding accuracy, Baidu's and Nvidia's tests showed that there is almost no accuracy difference. This paper even showed 3 much more radical variants that only took into account the last hidden state, and 2 of them performed just as well as the fully gated GRU. Equations are from the Wikipedia article.
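For reference, the fully gated GRU in Wikipedia's convention (one of several equivalent sign conventions for the update gate $z_t$):

```latex
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) &&\text{(update gate)}\\
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) &&\text{(reset gate)}\\
\hat{h}_t &= \tanh\!\bigl(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\bigr) &&\text{(candidate state)}\\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \hat{h}_t
\end{aligned}
```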
Regarding time-major speed, it was indeed my feeling.
For variable-length inputs, I suppose we have to wait for CuDNN 8.
A quick survey last Sunday among Kaggle data scientists (including masters and grandmasters) shows that batch-major is favored 4-0 (there is one vote by me in both sections to ease voting):
New paper LSTM benchmarks of deep learning frameworks: https://arxiv.org/pdf/1806.01818.pdf
RNNs and particularly LSTM and GRU made a significant contribution to deep learning applications.
They are the default go-to tool for natural language processing and are heavily explored in reinforcement learning, many combined visual+text tasks, and time-series prediction (though in competition with WaveNets).
The CuDNN implementation is already heavily optimized; the CPU implementation should be as fast as possible as well.
General overview
PyTorch equations
Note that in the paper equations are:
And CuDNN
Readable implementation
"Unreadable" C++ implementations (static graphs)
Benchmarks
Unfortunately, only GPU benchmarks are available:
Optimized implementations
Note on biases and equations
The various implementations do not agree on the biases and equations chosen.
To allow loading weights on both CPU and GPU, it would be best to use the same equations as CuDNN.
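The key divergence is in the candidate hidden state. A sketch of the two formulations (the bias names $b_{Wh}$ and $b_{Uh}$ are illustrative):

```latex
\begin{aligned}
\text{vanilla (single bias):}\quad
\hat{h}_t &= \tanh\!\bigl(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\bigr)\\
\text{CuDNN (double bias):}\quad
\hat{h}_t &= \tanh\!\bigl(W_h x_t + b_{Wh} + r_t \odot (U_h h_{t-1} + b_{Uh})\bigr)
\end{aligned}
```

Because CuDNN applies the reset gate after the recurrent matmul (and its bias), weights trained under one convention do not transfer exactly to the other, which is why matching the CuDNN equations matters for weight portability.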
List of relevant issues: