stelfrich opened this issue 8 years ago (status: Open)
ojAlgo is easy to integrate with. In many cases all you need is to implement the `Access2D` interface - and that's not difficult!

I'm the author of ojAlgo. If there's anything about ojAlgo you think makes it unnecessarily difficult to integrate with, I'd like to know about it.
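Purely as an illustration, here is a rough sketch of what such an `Access2D` adapter over a 2D ImgLib2 image could look like. The class and its name are hypothetical, and it assumes `Access2D` exposes `get(long, long)` and `doubleValue(long, long)` plus the row/column/element counts from its parent interfaces (some of which may already have default implementations, depending on the ojAlgo version):

```java
import net.imglib2.RandomAccess;
import net.imglib2.RandomAccessibleInterval;
import net.imglib2.type.numeric.RealType;

import org.ojalgo.access.Access2D; // org.ojalgo.structure.Access2D in newer ojAlgo versions

// Hypothetical adapter: exposes a 2D ImgLib2 image to ojAlgo without copying pixel data.
// Not thread-safe (it caches a single RandomAccess).
public class ImgAccess2D< T extends RealType< T > > implements Access2D< Double >
{
	private final RandomAccessibleInterval< T > img;

	private final RandomAccess< T > access;

	public ImgAccess2D( final RandomAccessibleInterval< T > img )
	{
		this.img = img;
		this.access = img.randomAccess();
	}

	@Override
	public double doubleValue( final long row, final long col )
	{
		access.setPosition( col, 0 ); // dimension 0 = column (x)
		access.setPosition( row, 1 ); // dimension 1 = row (y)
		return access.get().getRealDouble();
	}

	@Override
	public Double get( final long row, final long col )
	{
		return doubleValue( row, col ); // boxes a Double on every call
	}

	// The following may already be covered by default methods, depending on the ojAlgo version.

	public double doubleValue( final long index )
	{
		// ojAlgo uses column-major linear indices
		return doubleValue( index % countRows(), index / countRows() );
	}

	public Double get( final long index )
	{
		return doubleValue( index );
	}

	public long countRows()
	{
		return img.dimension( 1 );
	}

	public long countColumns()
	{
		return img.dimension( 0 );
	}

	public long count()
	{
		return countRows() * countColumns();
	}
}
```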
Thanks for getting back to us @apete, I did not expect that! :+1:
Looking at the `Access2D` interface brings back some thoughts for our use case: the biggest concern is the typing of the interface (i.e. `<N extends Number>`). In order to comply with that, we'd have to create new instances of `Number` subtypes for all operations, due to how data storage and access is implemented in imglib2, i.e. `Type`s are proxies to the underlying data storage. While that's certainly doable, it's currently out of scope (at least for me). I might find some time in fall to look into this again.
> In order to comply with that, we'd have to create new instances of `Number` subtypes for all operations due to how data storage and access is implemented in imglib2
Are you sure about that? The main interface method in question is:

```java
N get(long row, long column);
```

And we could certainly implement that without needing to actually cut new subclasses of `Number`. That is: nearly all of the ImgLib2 types cleanly map to some already-existing subclass of `Number`, e.g. `FloatType` ➡️ `Float`.

Of course, actually calling `get(row, col)` in a loop over all the pixels would be horribly more expensive than necessary, due to all the boxing that would be done. But whether that happens really depends on where you hand the `Access2D`, and what the consuming code does with it. Right?

Or am I missing some subtlety here?
Alternatively, we could have a single subclass of `Number` which wraps a `T extends NumericType<T>`, or maybe `T extends RealType<T>`...
I don't know what you want/need, but I assume some of the matrix decompositions would be useful. If you create a class that implements `ElementsSupplier` you can supply whatever elements you have directly to the matrix decomposition implementation's internal data structure. (I actually wrote this before @ctrueden commented.)
> And we could certainly implement that without needing to actually cut new subclasses of `Number`.
Sorry for the confusion @ctrueden, I was talking about instances, not subclasses. That is, we'd have to do a lot of boxing / unboxing / mapping between data types, just like you pointed out.
> I don't know what you want/need, but I assume some of the matrix decompositions would be useful.
That's true, @apete. There are two somewhat different discussions running in parallel where ojAlgo might be interesting:

1. `MatrixType`, discussed in this issue (which will mainly be used for storing information)
2. @hanslovsky's implementation in https://github.com/imglib/imglib2-algorithm/pull/29, which uses Apache Commons Math. His `RealCompositeMatrix` implementation extends `AbstractRealMatrix` and uses the library to do an eigenvalue decomposition.

Are there any thoughts on using Apache Commons Math for the matrix stuff, @tpietzsch? It is an additional, massive dependency to pull in...
What about using JAMA for doing the eigenvalue decomposition? It's already a dependency...
http://math.nist.gov/javanumerics/jama/doc/Jama/EigenvalueDecomposition.html
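For reference, the JAMA route would look roughly like this (a minimal sketch with made-up input values):

```java
import Jama.EigenvalueDecomposition;
import Jama.Matrix;

public class JamaEigSketch
{
	public static void main( final String[] args )
	{
		// A small symmetric example matrix.
		final double[][] data = {
				{ 2, 1, 0 },
				{ 1, 2, 1 },
				{ 0, 1, 2 } };

		final Matrix m = new Matrix( data );
		final EigenvalueDecomposition eig = m.eig();

		final double[] realEigenvalues = eig.getRealEigenvalues();
		final Matrix eigenvectors = eig.getV();

		System.out.println( java.util.Arrays.toString( realEigenvalues ) );
		eigenvectors.print( 8, 4 );
	}
}
```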
Calling Apache Commons Math a "massive" dependency seems unfair—it has no dependencies after all.
Now jruby-core... that is a massive dependency! https://repo1.maven.org/maven2/org/jruby/jruby-core/9.1.5.0/jruby-core-9.1.5.0.pom
I used Apache Commons Math because it offers a Matrix interface rather than a class, and eigenvalue decomposition for that interface. That allows for an implementation of Matrix as a view on existing data to avoid allocation and copy. As far as I can tell, Jama only has a class instead of an interface, which would render Jama useless for any use case like the one referenced by @stelfrich (imglib/imglib2-algorithm#29).
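To illustrate the view idea, here is a minimal sketch of a `RealMatrix` backed by an existing `double[]` without copying (hypothetical class; in commons-math3 the abstract methods of `AbstractRealMatrix` to fill in are essentially `getRowDimension`, `getColumnDimension`, `getEntry`, `setEntry`, `createMatrix` and `copy`):

```java
import org.apache.commons.math3.linear.AbstractRealMatrix;
import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.RealMatrix;

// Hypothetical: a RealMatrix view on an existing flat double[] (row-major layout assumed).
public class ArrayViewRealMatrix extends AbstractRealMatrix
{
	private final double[] data;

	private final int rows;

	private final int cols;

	public ArrayViewRealMatrix( final double[] data, final int rows, final int cols )
	{
		this.data = data;
		this.rows = rows;
		this.cols = cols;
	}

	@Override
	public int getRowDimension()
	{
		return rows;
	}

	@Override
	public int getColumnDimension()
	{
		return cols;
	}

	@Override
	public double getEntry( final int row, final int col )
	{
		return data[ row * cols + col ];
	}

	@Override
	public void setEntry( final int row, final int col, final double value )
	{
		data[ row * cols + col ] = value;
	}

	@Override
	public RealMatrix createMatrix( final int rowDimension, final int columnDimension )
	{
		return new Array2DRowRealMatrix( rowDimension, columnDimension );
	}

	@Override
	public RealMatrix copy()
	{
		return new Array2DRowRealMatrix( getData() ); // dense copy is fine here
	}
}
```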
As for the size of the Commons Math dependency, I am with @ctrueden. Plus, it is already managed by pom-scijava.
@haesleinhuepf I think we were hoping to move away from JAMA. See fiji/fiji#135. The consensus there was to use ojAlgo though, not Apache Commons Math.
I think for the purpose of @hanslovsky's PR, Commons Math is fine. Interesting that it offers a Matrix interface; I wonder why we missed it in the discussion so far. Having experience with neither Apache Commons Math nor ojAlgo, I think there might be value in using both, if only for a while.
What would be really useful for my PR is an eigen-decomposition that reuses its internal data members. Right now, I have to create a new eigen decomposition instance for each location. If I could just create one for the whole image and re-use that one, that might be a performance boost for dimensions larger than 2. Unfortunately, it seems that neither ojAlgo nor Commons Math offers this functionality.
FWIW, ejml's eigen decomposition (well, at least this one) reuses its data members.
ojAlgo does reuse internal memory as much as it can. Just reuse the same `MatrixDecomposition` instance for your next decomposition...
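A sketch of that reuse pattern, using the `computeValuesOnly`/`getEigenvalues` calls that also appear in the snippet further down (and assuming the decomposition accepts a `PrimitiveDenseStore` directly):

```java
import org.ojalgo.array.Array1D;
import org.ojalgo.matrix.decomposition.Eigenvalue;
import org.ojalgo.matrix.store.PrimitiveDenseStore;
import org.ojalgo.scalar.ComplexNumber;

public class ReuseEigenvalueSketch
{
	public static void main( final String[] args )
	{
		// One matrix store and one decomposition instance, reused for every location.
		final PrimitiveDenseStore matrix = PrimitiveDenseStore.FACTORY.makeZero( 3, 3 );
		final Eigenvalue< Double > ev = Eigenvalue.PRIMITIVE.make();

		for ( int location = 0; location < 1000; ++location )
		{
			// ...overwrite 'matrix' with the values for the current location...
			ev.computeValuesOnly( matrix );
			final Array1D< ComplexNumber > eigenvalues = ev.getEigenvalues();
			// ...consume 'eigenvalues'...
		}
	}
}
```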
BTW I've started an Apache Commons Math - ojAlgo integration project. There are just 2 small classes now, but they're very useful.
Specifically, I mean these lines:
https://github.com/optimatika/ojAlgo/blob/master/src/org/ojalgo/matrix/decomposition/RawEigenvalue.java#L1060-L1062
`Vt`, `d`, and `e` could be re-used without re-allocation, right?
The commons wrapper looks very useful, I will use it for a small benchmark for my use case.
FYI: This is the code example that I used to end up at `RawEigenvalue.java`:
```java
// Build a 3x3 PrimitiveMatrix via its builder; set( i, i ) assigns value i to linear index i.
final Builder< PrimitiveMatrix > b = PrimitiveMatrix.FACTORY.getBuilder( 3, 3 );
for ( int i = 0; i < b.count(); ++i )
	b.set( i, i );
final PrimitiveMatrix m = b.build();

// Wrap the matrix so it can be passed to the eigenvalue decomposition.
final LogicalBuilder< Double > w = MatrixStore.PRIMITIVE.makeWrapper( m );
System.out.println( m );
System.out.println( w );

// Compute only the eigenvalues (note: the result of the second make( true ) call is unused).
final Eigenvalue< Double > ev = Eigenvalue.PRIMITIVE.make();
Eigenvalue.PRIMITIVE.make( true );
ev.computeValuesOnly( w );
final Array1D< ComplexNumber > e = ev.getEigenvalues();
System.out.println( e );
```
In a first quick and dirty comparison (no real benchmark), ojAlgo was faster than commons-math by a factor of about 3.

In order to get the eigenvalues, I called `getEigenvalues`, which creates a new instance of `Array1D`. That could be avoided by calling `getRealEigenvalues` and `getImagEigenvalues` directly, but these methods are currently private.
You can open issues at ojAlgo for suggested optimisations like that.
Regarding your code snippet:

- No need to create `PrimitiveMatrix` and to use wrappers and builders... just create a `PrimitiveDenseStore`. That seems to be what you want/need.
- When you instantiate a matrix decomposition it's better if you supply a "typical matrix" to the factory. It will then return an implementation optimised for that matrix size (there are often 2 alternatives).
- If you look at the `RealMatrixWrapper` class in that integration project you have a template for writing ojAlgo adaptors. Just replace `org.apache.commons.math3.linear.RealMatrix` with anything else...
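Taking that advice, the snippet from above might simplify to something like this (untested sketch; the factory overload taking a "typical" matrix is assumed from the comment above):

```java
// Sketch: PrimitiveDenseStore instead of PrimitiveMatrix + builder + wrapper.
final PrimitiveDenseStore m = PrimitiveDenseStore.FACTORY.makeZero( 3, 3 );
for ( int i = 0; i < m.count(); ++i )
	m.set( i, i ); // linear index i gets value i, as in the original snippet

// Supply a "typical" matrix so the factory can pick an implementation for this size.
final Eigenvalue< Double > ev = Eigenvalue.PRIMITIVE.make( m );
ev.computeValuesOnly( m );
System.out.println( ev.getEigenvalues() );
```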
With optimatika/ojAlgo#33 and optimatika/ojAlgo#34 I vote for ojAlgo as the matrix library. imglib/imglib2-algorithm#29 would only be merged after the changes to ojAlgo are released in a non-SNAPSHOT version.
We have talked about implementing new `Type`s for vectors and matrices that would ideally implement interfaces of a matrix library for more advanced computations (see also https://github.com/fiji/fiji/issues/135).

I have started working on `DoubleMatrix2DType` as well as `DoubleVectorType`, without any further interface hierarchy like `MatrixType` or `VectorType`, at https://github.com/stelfrich/imglib2/tree/matrixType

There was a version of `DoubleMatrix2DType` that implemented the `FixedMatrix64F` interface of ejml, which was a dead end: internally, ejml relies heavily on `DenseMatrix64F`, which is a completely different part of the type hierarchy. After some discussion with @tpietzsch, it currently doesn't seem to be worth the hassle to integrate the matrix and vector types with, for example, ojAlgo.