mratsim / Arraymancer

A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends
https://mratsim.github.io/Arraymancer/
Apache License 2.0

Overview of the fastest CPU RNNs implementation #228

Closed: mratsim closed this issue 6 years ago

mratsim commented 6 years ago

RNNs, and particularly LSTM and GRU, have made a significant contribution to deep learning applications.

They are the default go-to tool for natural language processing, and are heavily explored in reinforcement learning, many combined vision+text tasks, and time-series prediction (though in competition with WaveNets).

The CuDNN implementation is already heavily optimized; the CPU implementation should be as fast as possible as well.

General overview

Readable implementation

"Unreadable" C++ implementations (static graphs)

Benchmarks

Unfortunately, only GPU benchmarks are available:

Optimized implementations

Note on biases and equations

The various implementations do not agree on the biases and equations chosen.

To allow loading weights on both CPU and GPU, it would be best to use the same equations as CuDNN.
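For reference, here is a sketch of the CuDNN-style GRU equations (notation loosely follows common conventions; the cuDNN documentation uses i and h' for the update gate and candidate). The key differences from the textbook GRU are the two bias vectors per gate and the reset gate applied after the recurrent matrix product:

```latex
\begin{aligned}
r_t &= \sigma\big(W_r x_t + R_r h_{t-1} + b_{W_r} + b_{R_r}\big) \\
z_t &= \sigma\big(W_z x_t + R_z h_{t-1} + b_{W_z} + b_{R_z}\big) \\
n_t &= \tanh\big(W_n x_t + b_{W_n} + r_t \odot (R_n h_{t-1} + b_{R_n})\big) \\
h_t &= (1 - z_t) \odot n_t + z_t \odot h_{t-1}
\end{aligned}
```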

List of relevant issues:

sclee15 commented 6 years ago

Yes, I second this feature!

mratsim commented 6 years ago

I have GRU Cells forward and backprop mostly working and tested.

I tried to implement the optimizations mentioned by the Silicon Valley AI Lab/Baidu Research here, and asked for clarification because their GRU variant 4's claim of "more speed, same memory usage" seems to actually be "more speed, more memory usage".

https://github.com/svail/diff_graphs/issues/2

I will probably implement forward, backward and inference primitives for all RNNs (all layers?), as there are huge gains to be had if we can re-use/destroy the input tensors, or at least the intermediate gates, during inference when there is no need for backprop.
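To illustrate why a separate inference primitive pays off, here is a minimal scalar sketch in plain Nim (std/math only; this is not the Arraymancer API, and the precomputed linear-term parameters are an assumption for brevity). During inference the gate activations are throwaway locals, whereas a training forward pass would have to keep r, z and n around for the backward pass.

```nim
import std/math

func sigmoid(x: float32): float32 =
  1'f32 / (1'f32 + exp(-x))

func gruCellInference(wxr, wxz, wxn: float32,  # precomputed W*x terms (plus input biases)
                      uhr, uhz, uhn: float32,  # precomputed U*h_prev terms (plus recurrent biases)
                      hPrev: float32): float32 =
  ## One scalar "unit" of a cuDNN-style GRU cell.
  ## The reset gate is applied after the recurrent transform (uhn).
  let r = sigmoid(wxr + uhr)     # reset gate
  let z = sigmoid(wxz + uhz)     # update gate
  let n = tanh(wxn + r * uhn)    # candidate hidden state
  # r, z and n can be discarded immediately: nothing is kept for backprop.
  result = (1'f32 - z) * n + z * hPrev

when isMainModule:
  echo gruCellInference(0.1'f32, 0.2'f32, 0.3'f32,
                        0.4'f32, 0.5'f32, 0.6'f32, 0.0'f32)
```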

mratsim commented 6 years ago

Tracking more implementations.

There is an ongoing rewrite of MxNet CPU RNNs using fused kernels:

They can serve as a reference benchmark.

I also noticed that there is experimental RNN cell support in MKL-DNN, introduced here: https://github.com/intel/mkl-dnn/commit/f35779d62a0b3a2e0f6be79a647b1e3acf02129b. Not too sure how it relates to https://github.com/intel/mkl-dnn/issues/218

mratsim commented 6 years ago

The GRU Cell, forward, backward and inference are fully implemented with tests in #231.

Now I'm implementing GRU (in a fused manner), however some questions are unresolved:
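For context, the usual fused layout (as described in the Baidu/SVAIL and cuDNN material; this is a sketch, not Arraymancer's final implementation) hoists the input projection out of the time loop, so the only strictly sequential work left is the recurrent part. With X the [T, input] sequence and W the stacked gate weights:

```latex
\begin{aligned}
G^{x} &= X\,W^{\top} \in \mathbb{R}^{T \times 3H}
  && \text{(one GEMM for the whole sequence, outside the time loop)} \\
\text{for } t = 1 \dots T:\quad
g^{h}_t &= R\,h_{t-1}
  && \text{(the only strictly sequential GEMM)} \\
h_t &= \mathrm{GRUCell}\big(G^{x}_t,\; g^{h}_t,\; h_{t-1}\big)
\end{aligned}
```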

pengzhao-intel commented 6 years ago

@mratsim, @TaoLv can answer parts of your questions based on our CPU implementation principles.

TaoLv commented 6 years ago

@mratsim I don't quite understand the weight reuse between CPU and GPU. Do you mean weights trained on GPU cannot be applied on CPU just because the CPU and GPU implementations have different equations? If so, how does TensorFlow handle this situation? AFAIK, TensorFlow has different equations from cuDNN but it has also integrated cuDNN.

For the input data layout, I guess time-major will show better performance on both CPU and GPU. Actually, MxNet performs a reshape for batch-major input: https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/rnn/rnn_cell.py#L677 Although I think this kind of reshape or layout change can be hidden in C++ code or a DNN library for better performance.

For variable-length input, I have no idea how a framework can perform highly efficient parallel computation on such packed input if it is not well aligned.

TaoLv commented 6 years ago

As the NVIDIA and Baidu blogs said, cuDNN's equations are more friendly to optimization. But I'm wondering if there are any accuracy differences between these two sets of equations.

mratsim commented 6 years ago

@TaoLv @pengzhao-intel

Thanks for dropping by. Regarding weight reuse, Keras simply prevents sharing CuDNN and CPU weights, and they reimplemented a CPU version compatible with CuDNN.

Now, in the grand scheme of things, I suppose they can actually be re-used, and the first couple of batches would act like transfer learning/domain adaptation, as for CNNs.

Regarding accuracy, Baidu's and Nvidia's tests showed that there is almost no accuracy difference. This paper even showed 3 much more radical variants that only take the last hidden state into account, and 2 of them performed just as well as the fully gated GRU. Equations are from the Wikipedia article.
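For reference, the fully gated GRU as given on Wikipedia (a single bias per gate, the reset gate applied before the recurrent product, and the interpolation convention flipped compared to cuDNN):

```latex
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) \\
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) \\
\hat{h}_t &= \tanh\big(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\big) \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \hat{h}_t
\end{aligned}
```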

Regarding time-major speed, that was indeed my feeling as well.

For variable-length inputs, I suppose we have to wait for CuDNN 8.

mratsim commented 6 years ago

A quick survey last Sunday among Kaggle data scientists (including masters and grandmasters) shows that batch-major is favored 4-0 (there is one vote by me in both sections to ease voting):

(screenshot: survey results, 2018-05-15)

mratsim commented 6 years ago

New paper LSTM benchmarks of deep learning frameworks: https://arxiv.org/pdf/1806.01818.pdf

(screenshot: benchmark figure from the paper, 2018-08-31)