tonyday567 closed this 8 years ago
Thanks for looking into this! I'd love to get something like this merged in if we can fix the issues I brought up in the inline comments.
Your intuition was spot on. Neat-looking code, a tiny ValidMatrix, no MatrixField, and mmult might even work! I added an example as well to test with. Try as I might, though, I couldn't wrangle mmult into being category friendly.
ping!
Whoops! I totally missed this. Thanks for following up.
It's a bit hard for me to read the pull request now that it's scattered across two commits. Do you think you could squash them together for me?
Also, do you have any intuition about why the category instance isn't working? I'd like to get that fixed too before accepting the pull request.
So I can get a category instance once wrapped with an Id and a Zero (which I called Matrix' to avoid a name clash with +>). What I can't work out is the next step: how to turn a Matrix' vect r a b into an a +> b.
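For anyone following along, here's a minimal sketch of the kind of wrapper being described: extra Id and Zero constructors make `id` and composition total. The names mirror the discussion but this is a stand-alone toy (the "matrix" payload is just a function), not subhask's actual Matrix' type.

```haskell
{-# LANGUAGE GADTs #-}
import Prelude hiding (id, (.))
import Control.Category

-- A matrix wrapper with Id and Zero constructors, so that a
-- Category instance can be written without needing a square
-- identity matrix in storage. `r` is a phantom ring parameter.
data Matrix' r a b where
  Id   :: Matrix' r a a              -- identity: square by construction, no storage
  Zero :: Matrix' r a b              -- the zero map
  Mat  :: (a -> b) -> Matrix' r a b  -- stand-in for a real vect-backed matrix

instance Category (Matrix' r) where
  id = Id
  Id . f      = f                     -- identity laws
  f  . Id     = f
  Zero . _    = Zero                  -- zero annihilates composition
  _ . Zero    = Zero
  Mat g . Mat f = Mat (\x -> g (f x)) -- ordinary composition of the payloads

-- Run the wrapped map on a value (Zero needs a numeric target).
apply :: Num b => Matrix' r a b -> a -> b
apply Id      x = x
apply Zero    _ = 0
apply (Mat f) x = f x
```

Composing then applying behaves as expected, e.g. `apply ((Mat (* 2) :: Matrix' () Int Int) . Mat (+ 1)) 3` gives `8`.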
I also think Id in Matrix' should have the type Matrix' vect r (a::Symbol) (b::Symbol) - id doesn't need to be square. But it doesn't compile.
Also I was wondering about all the (a::Symbol) (b::Symbol) noise, and whether it can all be done as just a b.
> I also think Id in Matrix' should be type Matrix' vect r (a::Symbol) (b::Symbol) - id doesn't need to be square. But it doesn't compile.
An identity matrix will always be square.
> Also I was wondering about all the (a::Symbol) (b::Symbol) noise, and whether it can all be done as just a b
Ideally, there'd be two cases. If the user uses a Symbol kind, then we don't know the dimensions at compile time, so we have to store the dimensions in memory. This gives us a slight overhead in terms of both memory usage and CPU time. If the user does know the size at compile time, then they could use a Nat instead. This wouldn't have to be stored in memory, and the number itself actually gets baked into a CPU instruction. I've done some (not-yet-public) tests and found that GHC is able to move the Nat in the Vector types into one of the operands of an assembly instruction. This can really improve cache efficiency and make the program quite a bit faster for small arrays.
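To illustrate the Nat side of that trade-off, here's a small stand-alone sketch (not subhask's actual Vector, which uses real storage): the length lives only in the type, so no Int field is carried at runtime, and GHC can specialise the `natVal` call to a constant.

```haskell
{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}
import GHC.TypeLits
import Data.Proxy

-- A vector whose length is a type-level Nat. The payload stores
-- no length; it is recovered from the type via KnownNat.
newtype Vec (n :: Nat) a = Vec [a]

-- The argument is ignored: the length comes purely from the type.
vlength :: forall n a. KnownNat n => Vec n a -> Int
vlength _ = fromIntegral (natVal (Proxy :: Proxy n))

-- Build a vector of the statically known size.
replicateV :: forall n a. KnownNat n => a -> Vec n a
replicateV x = Vec (replicate (fromIntegral (natVal (Proxy :: Proxy n))) x)
```

With a Symbol index instead, the size would have to be stored alongside the elements, which is exactly the overhead described above.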
I'd say supporting the Symbol vs Nat distinction is a nice-to-have, but not a need-to-have feature for now. There should be a way to arrange the code so that separate instances aren't needed for Symbol vs Nat matrices, but I haven't thought yet about the best way to do this.
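One possible arrangement (purely a sketch, with hypothetical names, not subhask's API) is a kind-polymorphic class over the dimension index: the Nat instance recovers the size from the type, while the Symbol instance has to fetch it from stored data, here faked with an association list.

```haskell
{-# LANGUAGE DataKinds, PolyKinds, KindSignatures, FlexibleInstances #-}
import GHC.TypeLits
import Data.Proxy

-- One class covers dimensions of any kind, so downstream code can
-- write a single instance over `Dim a, Dim b` rather than separate
-- Symbol and Nat versions.
class Dim (d :: k) where
  -- Recover the runtime size; the [(String, Int)] stands in for
  -- whatever run-time dimension data a Symbol-indexed matrix stores.
  dimVal :: [(String, Int)] -> Proxy d -> Maybe Int

instance KnownNat n => Dim (n :: Nat) where
  dimVal _ p = Just (fromIntegral (natVal p))   -- known at compile time

instance KnownSymbol s => Dim (s :: Symbol) where
  dimVal env p = lookup (symbolVal p) env       -- looked up at run time
```

The two instances live at different kinds, so GHC keeps them apart without overlap.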
All good then, except for how to turn a Matrix' vect r a b into an a +> b?
Thanks! Looks great! I'll have to think a bit about a better way to architect (+>).
Also, sorry I didn't see this until now... somehow github isn't sending me all the email updates that I think it should be...
As a useful follow-on, I've added issue #37, which I think partly motivated this pull request.
I had a go at a native subhask matrix and got as far as mmult (see the last line of Matrix.hs). I just couldn't close the lid on the +> category.
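For context, a dimension-indexed mmult can be sketched stand-alone like this (lists as backing store; subhask's real ValidMatrix/vect machinery is much richer). The phantom Nat indices make ill-sized multiplications a type error.

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}
import Data.List (transpose)
import GHC.TypeLits

-- An m-by-n matrix of Doubles; the dimensions are phantom Nats.
newtype Mx (m :: Nat) (n :: Nat) = Mx { unMx :: [[Double]] }
  deriving (Eq, Show)

-- The inner dimension b must match, mirroring (.) in a category:
-- mmult :: Mx a b -> Mx b c -> Mx a c is composition of linear maps.
mmult :: Mx a b -> Mx b c -> Mx a c
mmult (Mx xs) (Mx ys) =
  Mx [ [ sum (zipWith (*) row col) | col <- transpose ys ] | row <- xs ]

-- e.g. mmult (m :: Mx 2 3) (m' :: Mx 2 2) is rejected at compile time.
```

The `Mx a b -> Mx b c -> Mx a c` shape is exactly why a Category instance is tempting, and also why `id :: Mx a a` forces the identity to be square.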