Open · Magalame opened this issue 5 years ago
Thanks for the interest!
Unfortunately I'm currently on vacation cycling; I may have time in June to reply properly.
Note that `linearmap-category` is more of an interface library, for abstract linear algebra; matrix operations you might benchmark would really belong to a particular backend. `free-vector-spaces` is what I've done myself in that direction. (But my focus with the library has been more on going beyond matrix-based LA, towards infinite-dimensional functional analysis.)
Cheers,
Justus
Enjoy cycling!
I take it that `linearmap-category` wouldn't be suited to such benchmarks? The infinite-dimensional functional analysis will be exciting to see, though!
> I take it that `linearmap-category` wouldn't be suited to such benchmarks?
Well, what you'd need to benchmark is the combination of `linearmap-category` with a matrix backend. Arguably, the library should ship with such a backend, so: yes, it would actually be suited; it's just that I've never made it ready for that.
I would be happy to take this as an incentive for finally doing that. If you send me some reference benchmark code using hmatrix or NumPy or whatever, I could tailor something towards that.
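For concreteness, a reference point could look something like this (a sketch using criterion with hmatrix's dynamic matrices; the 500×500 size is an arbitrary placeholder):

```haskell
import Prelude hiding ((<>))  -- hmatrix re-uses (<>) for the matrix product
import Criterion.Main
import Numeric.LinearAlgebra

main :: IO ()
main = do
  a <- randn 500 500  -- Gaussian-random test matrices
  b <- randn 500 500
  defaultMain
    [ bench "hmatrix 500x500 multiply" $ whnf (a <>) b ]
    -- forcing the product to WHNF is enough to run the underlying BLAS call
```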
Regards,
Justus
I'm interested! @leftaroundabout, I've enjoyed perusing your libraries, and the other "category" and linalg libraries in this space, but I haven't been able to find anything that connects all these concepts: an "elegant" linalg library with category instances that can tie in to a fast library like hmatrix.
I'm happy to help out too, if I can get a little guidance since my math-fu is rather weak.
@freckletonj cool!
So you would be interested in writing a backend using hmatrix? I'm not sure that's really the most useful in practice (for real performance I'm rather envisioning something that could run on a GPU, using either Accelerate or a library such as Torch), but hmatrix would definitely be easier, and good to have if only for benchmark purposes.
Start by implementing `vector-space` and `free-vector-spaces` orphan instances. That shouldn't pose many problems or much controversy, at least for hmatrix' static-size vector types; a rough sketch of the `vector-space` side is below.
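Something like this, perhaps (an untested sketch, assuming `Numeric.LinearAlgebra.Static`; not instances the library currently ships):

```haskell
{-# LANGUAGE DataKinds, TypeFamilies #-}
{-# OPTIONS_GHC -Wno-orphans #-}

import Data.AdditiveGroup (AdditiveGroup (..))
import Data.VectorSpace (VectorSpace (..))
import GHC.TypeLits (KnownNat)
import qualified Numeric.LinearAlgebra.Static as HMatrix

instance KnownNat n => AdditiveGroup (HMatrix.R n) where
  zeroV   = HMatrix.konst 0  -- all-zeroes vector
  (^+^)   = (+)              -- R n has an elementwise Num instance
  negateV = negate

instance KnownNat n => VectorSpace (HMatrix.R n) where
  type Scalar (HMatrix.R n) = Double
  a *^ v = HMatrix.konst a * v  -- scale by broadcasting the scalar
```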
`TensorSpace` unfortunately won't be as easy; in principle we'd want something like

```haskell
type TensorProduct (HMatrix.R n) (HMatrix.R m) = HMatrix.L n m
```

...but that can't be done directly, because you need to support tensor products with any space `w`.
Maybe it is possible to get this right using a type family:

```haskell
type family HMatRTensor n w where
  HMatRTensor n (HMatrix.R m) = HMatrix.L m n
  -- Perhaps also a case for higher-rank tensors
  HMatRTensor n w = [w]  -- fallback for tensors with non-hmatrix types:
                         -- a generic array of row vectors; maybe use
                         -- Data.Vector.Vector here

type TensorProduct (HMatrix.R n) w = HMatRTensor n w
```
Unfortunately, implementing the methods will then require deciding whether or not `w` is a `HMatrix.R`, and Haskell is notoriously averse to such decisions. This is a broader issue that also applies to other backends. Possible approaches include:

- Making `Typeable` a superclass of `TensorSpace`. This is pretty straightforward, but of course hardly Haskell-idiomatic, and I'm not sure whether it can actually be used with polymorphism (which is needed if we have static dimensions).
- The `linearmap-category` and `manifolds` libraries already make a lot of use of GADTs like `LinearSpaceWitness`, which essentially defer type information to runtime. I've already thought of using something similar just for, specifically, the `Unbox` constraint (which would be useful for small fixed-size vectors; see the sketch below).

(cont'd)
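Roughly, for the hmatrix case such a witness could look like this (all names here are illustrative, not `linearmap-category`'s actual API):

```haskell
{-# LANGUAGE GADTs, DataKinds #-}

import GHC.TypeLits (KnownNat)
import qualified Numeric.LinearAlgebra.Static as HMatrix

-- Runtime evidence for whether a space is a statically sized hmatrix vector.
data HMatSpaceWitness w where
  IsHMatVector   :: KnownNat m => HMatSpaceWitness (HMatrix.R m)
  IsGenericSpace :: HMatSpaceWitness w

class HMatSpace w where
  hmatSpaceWitness :: HMatSpaceWitness w

instance KnownNat m => HMatSpace (HMatrix.R m) where
  hmatSpaceWitness = IsHMatVector

-- Pattern-matching on the witness recovers the type information, so a
-- method implementation can dispatch:
--
--   case hmatSpaceWitness :: HMatSpaceWitness w of
--     IsHMatVector   -> {- use HMatrix.L-based operations -}
--     IsGenericSpace -> {- use the generic [w] fallback -}
```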
Another approach: `type TensorProduct (HMatrix.R n) w = [w]` would work as well; matrix multiplications would then not use hmatrix' operations, but could still exploit BLAS-optimised additions. In fact, perhaps a good feature for the performance of a wide range of backends would be an interface for mutable addition:

```haskell
class TensorSpace v where
  ...
  type MVec m v :: Type
  addMutScaled :: PrimMonad m => Scalar v -> v -> MVec m v -> m ()
```
to make `[w]`-based matrix multiplications memory-efficient. (Such a method could be added without much compatibility concern, as it could default to performing the additions purely and injecting the results back into the monad; see the sketch below.)
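The default could look something like this (a sketch only; the class name and the `MutVar` fallback representation are assumptions for illustration, not the library's actual design):

```haskell
{-# LANGUAGE TypeFamilies, DefaultSignatures #-}

import Control.Monad.Primitive (PrimMonad, PrimState)
import Data.AdditiveGroup ((^+^))
import Data.Kind (Type)
import Data.Primitive.MutVar (MutVar, readMutVar, writeMutVar)
import Data.VectorSpace (VectorSpace (..))

class VectorSpace v => TensorSpaceSketch v where
  type MVec (m :: Type -> Type) v :: Type
  type MVec m v = MutVar (PrimState m) v  -- pure fallback representation

  addMutScaled :: PrimMonad m => Scalar v -> v -> MVec m v -> m ()
  default addMutScaled
    :: (PrimMonad m, MVec m v ~ MutVar (PrimState m) v)
    => Scalar v -> v -> MVec m v -> m ()
  addMutScaled a v target = do
    w <- readMutVar target             -- take the accumulator out,
    writeMutVar target (w ^+^ a *^ v)  -- add purely, put the result back
```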
Yet another possibility is a GADT:

```haskell
data HMatRTensor n w where
  HMatrixTensor     :: HMatrix.L n m -> HMatRTensor n (HMatrix.R m)
  HMatGenericTensor :: [w] -> HMatRTensor n w
```

This could work pretty well in practice. What I don't like about it is that performance may unpredictably degrade, because a `HMatRTensor n (HMatrix.R m)` can just as well come wrapped in the `HMatGenericTensor` constructor.

---

Awesome! Thanks for the leads, I think that's enough to get started, and I'll try and find some time this weekend.
I've started adding the HMatrix backend here: https://github.com/leftaroundabout/linearmap-family/pull/6
I'll think some more about the `TensorSpace` issues you've mentioned, as I'm sure you will too. I have a little experience using singletons (though not Richard's full-fledged library yet), so for now I'll think along those lines, since you suggested it might be the best option.
Hi!
I'm setting up a benchmark of different linear algebra libraries, and I was wondering: how does one produce a large matrix? Libraries often have a `fromList` method or similar.
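For reference, with hmatrix's dynamic matrices either of these works (a minimal sketch; the sizes are arbitrary):

```haskell
import Numeric.LinearAlgebra

main :: IO ()
main = do
  -- build a 1000×1000 matrix row-major from a (lazy) list:
  let m1 = (1000 >< 1000) [0 ..] :: Matrix Double
  -- or draw Gaussian-random entries, which avoids benchmarking on
  -- degenerate inputs:
  m2 <- randn 1000 1000
  print (rows m1, cols m2)
```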