sparsemat / sprs

sparse linear algebra library for rust
Apache License 2.0
381 stars 45 forks

fully generic MulAcc<A,B> - groundwork for Fully Generic Matrix/Vector element, Product and Accumulator Types - #284

Closed experiment9123 closed 3 years ago

experiment9123 commented 3 years ago

Partial implementation of #283

mulimoen commented 3 years ago

This looks great! Any clear benefit in using two type bounds in MulAcc? Is multiplying i16 by i32 and outputting i64 something we should implement, or is it enough to have just MulAcc<N>?

It would be awesome to see this working with your use case. Could you make an impl in src/mul_acc and implement e.g. MulAcc<i8, i8> for i16 to show the full benefit? Otherwise one must create a newtype if one wishes to use this, due to orphan rules.

experiment9123 commented 3 years ago

Thanks for reviewing the changes.

This looks great! Any clear benefit in using two type bounds in MulAcc? Is multiplying i16 by i32 and outputting i64 something we should implement, or is it enough to have just MulAcc<N>?

My use case (explained below) definitely does need two types. EDIT: I've defaulted pub trait MulAcc<SrcA = Self, SrcB = SrcA>, if that seems friendlier. Then the user can choose 'impl MulAcc for MySpecialType {}', or 'impl MulAcc<...> for MyOutput {}', or do both.
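To make the defaulting concrete, here is a minimal sketch of the trait shape described above. The method name mul_acc and the exact signature are assumptions following the module name, not necessarily the merged API:

```rust
// Sketch of a generic multiply-accumulate trait with defaulted source
// types, so the homogeneous case stays as terse as MulAcc<N> was.
pub trait MulAcc<SrcA = Self, SrcB = SrcA> {
    /// Accumulate the product: `self += a * b`.
    fn mul_acc(&mut self, a: &SrcA, b: &SrcB);
}

// With the defaults, the homogeneous case needs only `MulAcc for T`:
impl MulAcc for f64 {
    fn mul_acc(&mut self, a: &f64, b: &f64) {
        *self += a * b;
    }
}

fn main() {
    let mut acc = 0.5f64;
    acc.mul_acc(&2.0, &3.0);
    assert_eq!(acc, 6.5);
}
```

A heterogeneous user then writes impl MulAcc<A, B> for N with all three types spelled out, while existing homogeneous code keeps compiling unchanged.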

(EDIT2: re "orphan rules": will we need to make impls for the combinations of primitive types? I just tried that and it didn't let me; perhaps it conflicts with the existing 'MulAcc' impls, so it might need newtypes all the way.)

Bear in mind dimension-checked maths as well, which some people like. I see Rust now has const generics (yay!), making that possible; I've seen people write physics engines with that used all the way through. Const generics also let you write nice generic fixed-point maths, which could also go in here.
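As a hypothetical illustration of the const-generics point, a fixed-point type can carry its fractional-bit count in the type itself (the name Fixed and this layout are made up for the sketch):

```rust
// Fixed-point value with FRAC fractional bits tracked at the type level.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Fixed<const FRAC: u32>(i32);

impl<const FRAC: u32> Fixed<FRAC> {
    fn from_int(v: i32) -> Self {
        Fixed(v << FRAC)
    }
    /// Multiply two values of the same precision, widening to i64
    /// internally so the intermediate product doesn't overflow.
    fn mul(self, other: Self) -> Self {
        Fixed(((i64::from(self.0) * i64::from(other.0)) >> FRAC) as i32)
    }
}

fn main() {
    let a = Fixed::<8>::from_int(3);
    let b = Fixed::<8>::from_int(4);
    assert_eq!(a.mul(b), Fixed::<8>::from_int(12));
}
```

Because the precision is a type parameter, mixing two different precisions is a compile error rather than a silent scaling bug, which is exactly the kind of element type a generic MulAcc could accommodate.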

It would be awesome to see this working with your use case. Could you make an impl in src/mul_acc and implement e.g. MulAcc<i8, i8> for i16 to show the full benefit? Otherwise one must create a newtype if one wishes to use this, due to orphan rules.

My use case is speculative, but let me elaborate:

1. I wrote a Graph<Node,Edge>, starting out on a spiking-neuron framework experiment (trying to push Rust on those people).
2. I then realised the critical step of information flowing across edges was actually the same pattern as sparse matrix x sparse vector, so I reworked this to use a mini sparse-matrix impl (COO format), storing a SparseMatrixCOO<Edge>, with Edge * Cell producing the 'spike'. I was actually testing this update with Conway's GoL for something more recognisable. I only did SparseMatrix<Edge> x DenseVector<Cell>, and figured "OK, I'll look for an existing matrix lib to do actual sparse vectors"; sure enough, sprs appears to have this.
3. I tried Eigen in C++ for the same idea and found it can indeed do it (you just need to supply some messy C++-style type traits).
4. In graphics I have seen cases like an f16 matrix x 32-bit coordinates, 32-bit matrices x 16-bit compressed coords, or 16x16-bit but with different fixed-point precisions; the permutations that have appeared through the ages are endless.
5. You could also do "binary matrices", i.e. weights that are one or zero: just have a newtype for a "one" (no data), with the assumption that absence is zero, reusing all the logic for sparse edge searches (in fact this is what my current Conway's GoL example really is).
6. I speculate that a mix of sparse and dense, e.g. a sparse matrix of dense 4x4 blocks, could be implemented if we keep the maths non-commutative, e.g. throwing 4x4 matrices in as the element type (but I haven't checked that all the way through).
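The binary-matrix idea in point 5 can be sketched with a zero-sized newtype; the trait shape here mirrors the one under discussion in this PR, and the One type and the neighbour-counting framing are illustrative assumptions:

```rust
// Zero-sized "weight": a stored entry means 1, a missing entry means 0,
// so the sparsity pattern alone encodes a binary matrix.
#[derive(Clone, Copy, Debug, PartialEq)]
struct One;

// Assumed trait shape from this PR (defaulted source types).
trait MulAcc<SrcA = Self, SrcB = SrcA> {
    fn mul_acc(&mut self, a: &SrcA, b: &SrcB);
}

// Multiplying by a stored `One` just accumulates the other operand,
// e.g. counting live neighbours in a Conway's GoL step.
impl MulAcc<One, u8> for u32 {
    fn mul_acc(&mut self, _a: &One, b: &u8) {
        *self += u32::from(*b);
    }
}

fn main() {
    let mut neighbours = 0u32;
    for cell in [1u8, 0, 1, 1] {
        neighbours.mul_acc(&One, &cell);
    }
    assert_eq!(neighbours, 3);
}
```

Because One carries no data, the matrix storage cost is just the index structure, yet all the sparse traversal machinery is reused unchanged.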

So to get as far as using sprs in my use case, I will need to refactor all the steps to get the matrix mul working. I estimate it's a few PRs (too much for one sitting; reading code is hard) to get it done. I'd worry about doing it all in one go: the project can diverge, then you get merge hell, and it'll be harder to verify each change.

The work I've done so far shows it's definitely possible, but it requires a few reworks here and there. (I saw another place where it looked like it did to_owned() to return an empty vector if the source was empty, as if it were cloning the input; you have to stop and think to get the type right there, etc.)

https://github.com/experiment9123/graphengine/blob/master/rustgraph/src/lib.rs https://github.com/experiment9123/graphengine/blob/master/rustgraph/examples/gol.rs

mulimoen commented 3 years ago

If you rebase on master you'll get the CI fixes. If you could add a test where you use the MulAcc trait on a newtype, with A != B != N, that would be great for this PR.

experiment9123 commented 3 years ago

If you rebase on master you'll get the CI fixes. If you could add a test where you use the MulAcc trait on a newtype, with A != B != N, that would be great for this PR.

OK, I've added a test like this: mul_acc_mixed_param_sizes, with impl MulAcc<Wrapped<i32>, Wrapped<i16>> for Wrapped<i64> { ... } (integer values for a precise equality test).
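A sketch of what such a test might look like; the Wrapped newtype and the exact impl body are reconstructed from the comment above, not copied from the merged code:

```rust
// Newtype wrapper so the impl is local to the test crate (orphan rules).
#[derive(Clone, Copy, Debug, PartialEq)]
struct Wrapped<T>(T);

// Assumed trait shape from this PR (defaulted source types).
trait MulAcc<SrcA = Self, SrcB = SrcA> {
    fn mul_acc(&mut self, a: &SrcA, b: &SrcB);
}

// Three distinct parameter sizes: i32 * i16 accumulated into i64.
impl MulAcc<Wrapped<i32>, Wrapped<i16>> for Wrapped<i64> {
    fn mul_acc(&mut self, a: &Wrapped<i32>, b: &Wrapped<i16>) {
        self.0 += i64::from(a.0) * i64::from(b.0);
    }
}

fn main() {
    let mut acc = Wrapped(1i64);
    acc.mul_acc(&Wrapped(100_000i32), &Wrapped(30_000i16));
    // The product (3_000_000_000) overflows i32, so widening before the
    // multiply matters; integer values give an exact equality check.
    assert_eq!(acc.0, 1 + 100_000i64 * 30_000i64);
}
```

Note the operands are widened to i64 before multiplying, which is the whole point of letting the accumulator type differ from both source types.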

mulimoen commented 3 years ago

I took the liberty of rebasing and fixing some minor nits. Thank you @experiment9123 for your work on this! I'm excited to see this implemented on Mul for matrices.

experiment9123 commented 3 years ago

That's great. I'll grab the latest version and try to make my way back through the callgraph till we can use this in sparse matrix x sparse vector mul. I suspect it'll be several PRs yet, but we might get a user-facing "sparse dot product" first.