rust-ml / linfa

A Rust machine learning framework.

Investigate discrepancy between non-BLAS and BLAS versions of `linfa-linear` #277

Open YuhanLiin opened 2 years ago

YuhanLiin commented 2 years ago

According to the benchmark results from this comment, `linfa-linear` is faster with BLAS than without. The OLS algorithm performs about the same with 5 features and is only slightly slower without BLAS with 10 features, but GLM is significantly slower without BLAS. We should profile both builds and investigate the difference between the BLAS and non-BLAS performance.

oojo12 commented 2 years ago

Flamegraphs from profiling are attached. I haven't studied profiling or flamegraphs enough to build a firm foundation yet, so I can't provide any insights. Each profile was run for 1 minute. Linfa_linear.zip

oojo12 commented 2 years ago

I did a quick review. The 10-feature GLM benchmark with 100_000 samples spends most of its CPU time in `ndarray::zip::Zip<(P1,P2),D>::for_each`, which is called twice. Both calls share `argmin::core::executor::Executor<O,S>::run` as an ancestor.

One call path goes through `<argmin::solver::linesearch::morethuente::MoreThuenteLineSearch<P,F> as argmin::core::Solver<O>>::next_iter` before reaching their common `<linfa_linear::glm::TweedieProblem<A> as argmin::core::ArgminOp>::gradient` operation. This fork in processing is what leads to `ndarray::zip::Zip<(P1,P2),D>::for_each` being called twice. If it were possible to consolidate the calls so that the gradient operation runs once, or to introduce some sort of caching (if applicable; I'm not sure it will be, since we're using gradient descent), then I think performance would improve.
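A minimal sketch of the caching idea, assuming the solver ends up requesting the gradient more than once for the same parameters; `CachedGradient` and the `compute_gradient` closure are illustrative names, not linfa or argmin code:

```rust
use ndarray::Array1;

/// Remembers the most recent (parameters, gradient) pair so that a repeated
/// request for the same parameter vector returns the cached value instead of
/// recomputing it.
struct CachedGradient {
    last_param: Option<Array1<f64>>,
    last_grad: Option<Array1<f64>>,
}

impl CachedGradient {
    fn new() -> Self {
        Self { last_param: None, last_grad: None }
    }

    fn gradient<F>(&mut self, param: &Array1<f64>, compute_gradient: F) -> Array1<f64>
    where
        F: Fn(&Array1<f64>) -> Array1<f64>,
    {
        // Reuse the cached gradient if the parameters haven't changed.
        if let (Some(p), Some(g)) = (&self.last_param, &self.last_grad) {
            if p == param {
                return g.clone();
            }
        }
        let grad = compute_gradient(param);
        self.last_param = Some(param.clone());
        self.last_grad = Some(grad.clone());
        grad
    }
}

fn main() {
    let mut cache = CachedGradient::new();
    let params = Array1::from(vec![1.0, 2.0, 3.0]);
    // The second call with identical parameters hits the cache.
    let g1 = cache.gradient(&params, |p| p * 2.0);
    let g2 = cache.gradient(&params, |p| p * 2.0);
    assert_eq!(g1, g2);
}
```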

A screenshot of the graph (profile-linear) is attached.

oojo12 commented 2 years ago

Also, if we use the Rayon backend we could look into using `par_azip` instead of `Zip`; it's the parallel version.
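For reference, a small sketch of the two forms (the arrays and the operation are placeholders; `par_azip` needs ndarray built with the rayon feature):

```rust
use ndarray::parallel::par_azip;
use ndarray::{azip, Array1};

fn main() {
    let a = Array1::<f64>::ones(100_000);
    let b = Array1::<f64>::ones(100_000);
    let mut out = Array1::<f64>::zeros(100_000);

    // Serial element-wise loop, essentially what Zip::for_each runs.
    azip!((o in &mut out, &x in &a, &y in &b) *o = x * y);

    // Parallel version: the same loop split across Rayon worker threads.
    par_azip!((o in &mut out, &x in &a, &y in &b) *o = x * y);
}
```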

YuhanLiin commented 2 years ago

`for_each` is being called by `general_mat_vec_mul_impl`, which is ndarray's pure-Rust matrix-vector multiplication routine. AFAIK it's the only difference between BLAS and non-BLAS for GLM, since BLAS uses a different multiplication routine. One way to improve this is to call the multiplication less often. We could also try enabling the `matrixmultiply-threading` feature on ndarray to speed up the multiplication itself. Do we also know whether the BLAS version has the same bottlenecks?
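For context, the call in question has roughly this shape (the sizes mirror the 100k-sample, 10-feature benchmark; the data is a placeholder):

```rust
use ndarray::linalg::general_mat_vec_mul;
use ndarray::{Array1, Array2};

fn main() {
    // Roughly the benchmark's shape: 100k samples, 10 features.
    let a = Array2::<f64>::ones((100_000, 10));
    let x = Array1::<f64>::ones(10);
    let mut y = Array1::<f64>::zeros(100_000);

    // y = 1.0 * a.dot(&x) + 0.0 * y. Without the BLAS feature this goes
    // through the pure-Rust path that appears as general_mat_vec_mul_impl /
    // Zip::for_each in the flamegraphs.
    general_mat_vec_mul(1.0, &a, &x, 0.0, &mut y);
}
```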

oojo12 commented 2 years ago

I think calling the multiplication less often would likely give the bigger performance boost. I can test enabling the `matrixmultiply-threading` feature locally.

I didn't profile the BLAS version. Is this desirable or just out of curiosity?

YuhanLiin commented 2 years ago

It would be useful to see whether multiplication is also the bottleneck for BLAS, since the BLAS build also calls multiplication in two places.

oojo12 commented 2 years ago

Okay, I'll run 2 more: one with BLAS and one with the `matrixmultiply-threading` feature.

oojo12 commented 2 years ago

flamegraph
Blas flamegraph

The BLAS one is a little different than the other.

YuhanLiin commented 2 years ago

The fact that `gradient` appears twice in the first benchmark is likely due to a sampling inconsistency, not different behaviour. The second flamegraph also has two instances of `gradient`. The interesting thing is that the first few calls to `dot` perform similarly in both graphs, but the final `dot` in `gradient` is where the non-BLAS build slows down. Dimensionally, that `dot` is a matrix multiplication of y and X, which means it linearly combines each feature column (each of which has 100k elements). I think the non-BLAS `dot` doesn't scale well for inputs of that size.
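A hedged sketch of the shape of that final `dot` (the arrays are placeholders standing in for X and the per-sample vector in the gradient):

```rust
use ndarray::{Array1, Array2};

fn main() {
    // Design matrix: 100k samples x 10 feature columns.
    let x = Array2::<f64>::ones((100_000, 10));
    // Per-sample vector of the same length as a feature column.
    let r = Array1::<f64>::ones(100_000);

    // Each of the 10 feature columns is linearly combined with the
    // 100k-element vector, i.e. a (10 x 100_000) by (100_000,) product.
    let per_feature: Array1<f64> = x.t().dot(&r);
    assert_eq!(per_feature.len(), 10);
}
```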