shenhuinuist closed this issue 7 years ago
We will provide a sparse training demo for both local and cluster mode in a few days.
Sparse training in PaddlePaddle involves more than sparse matrix multiplication; it also touches the parameter server architecture, SGD optimization, and other components. So its performance cannot be directly compared against the MKL library alone.
I mean: considering only the sparse matrix multiplication itself, which is faster, calling MKL or using the mul function in Paddle?
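To make the comparison concrete, here is a minimal sketch of what a sparse-times-dense product looks like in CSR (compressed sparse row) format, the kind of kernel that either MKL or Paddle's mul would implement internally. The matrix values and the `csr_matvec` helper are illustrative, not Paddle or MKL code.

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Multiply a CSR-format sparse matrix by a dense vector x.

    data    -- nonzero values, row by row
    indices -- column index of each nonzero
    indptr  -- indptr[i]:indptr[i+1] delimits row i's nonzeros
    """
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = indptr[i], indptr[i + 1]
        # Only the stored nonzeros of row i participate in the dot product.
        y[i] = np.dot(data[start:end], x[indices[start:end]])
    return y

# Hypothetical 3x4 sparse matrix:
# [[1 0 2 0],
#  [0 0 3 0],
#  [4 5 0 0]]
data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
indices = np.array([0, 2, 2, 0, 1])
indptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 1.0, 1.0, 1.0])

print(csr_matvec(data, indices, indptr, x))  # [3. 3. 9.]
```

Which implementation wins in practice depends heavily on the sparsity pattern, matrix shape, and threading, which is why a single benchmark number is hard to give.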
Pull request for the sparse training doc: https://github.com/baidu/Paddle/pull/144, which is pending merge. You can preview it here: https://github.com/backyes/Paddle/blob/sparse_train_doc_demo/doc/cluster/opensource/cluster_train.md
We will share performance numbers once fine-grained, pure sparse-matrix benchmark data is available.
Thanks!
Hi, can you provide a demo showing how to use sparse training? According to Paddle's documentation, there are four kinds of input: dense_vector, sparse_binary_vector, sparse_float_vector, and integer. Have you considered adding a sparse_int_vector? I also noticed that Paddle implements sparse matrix multiplication without calling the MKL library. I want to know whether Paddle's implementation of sparse matrix multiplication is faster than calling MKL.
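As a plain-Python illustration (not the actual Paddle data-provider API), the four documented input types might be represented like this; the variable names and the `to_dense` helper are hypothetical:

```python
# Each slot stored explicitly, zeros included.
dense_vector = [0.5, 0.0, 1.2, 0.0]

# Only the indices of entries that are 1; all others are implicitly 0.
sparse_binary_vector = [0, 2]

# (index, value) pairs for the nonzero entries only.
sparse_float_vector = [(0, 0.5), (2, 1.2)]

# A single categorical id (e.g. a word or label index).
integer_value = 3

def to_dense(pairs, dim):
    """Expand sparse_float_vector-style (index, value) pairs to a dense list."""
    v = [0.0] * dim
    for i, val in pairs:
        v[i] = val
    return v

print(to_dense(sparse_float_vector, 4))  # [0.5, 0.0, 1.2, 0.0]
```

A sparse_int_vector, as asked about above, would presumably store (index, integer value) pairs in the same spirit as sparse_float_vector.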