imglib / imglib2-algorithm

Image processing algorithms for ImgLib2
http://imglib2.net/

Add Hessian eigenvalues #29

Closed hanslovsky closed 7 years ago

hanslovsky commented 8 years ago

This pull request features:

For this purpose, these classes and methods were added:

Before merging this, please comment on the package structure:

Once the pull request is ready to be merged, I will rebase in my local repository and create a new pull request.

hanslovsky commented 7 years ago

@tinevez's commit https://github.com/tinevez/CircleSkinner/commit/84e45698ea4a36a74169eda339077f096d224e9f reminded me of this outstanding pull request. I should have some time next week to clean up and simplify the interface. Specifically, I will change the interface such that all inputs must be supplied by the caller. That might be as extreme as having an interface that accepts only the gradient image. That way the number of overloads would be greatly reduced. Also, I would like to get rid of one of

func( args..., int nThreads, ExecutorService es)

and

func( args..., int nThreads )

Currently, I lean towards keeping the first option and removing the second one. Another option would be to get rid of the parallel computation entirely, as the caller can parallelize by calculating the Hessian on chunks of data. That would greatly reduce the number of overloads.
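For illustration, here is a minimal sketch of what a single, gradient-only entry point with the ExecutorService pattern might look like (all names and signatures below are hypothetical, not the final API):

public static < T extends RealType< T > > void hessianMatrixFromGradient(
        final RandomAccessible< T > gradient,               // (n+1)-dimensional, last dimension indexes the partial derivatives
        final RandomAccessibleInterval< T > hessianMatrix,  // (n+1)-dimensional, last dimension indexes the unique matrix entries
        final int nTasks,
        final ExecutorService es )
{
    // derive the second partial derivatives from the supplied gradient,
    // splitting the work into nTasks chunks submitted to es
}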

Thinking about the ImageJ picture, this should probably go into ops @ctrueden @dietzc . If you agree, I will write an op once this is merged in imglib2-algorithm. I will approach you with questions once I get there.

dietzc commented 7 years ago

Hi @hanslovsky,

I'd keep the variant with ExecutorService, which is a pattern used in other imglib2-algorithm classes, too (for example DoG). Having a wrapper in Ops would of course be great and I'm happy to help you with that!

tinevez commented 7 years ago

Hi @hanslovsky, great news! I am sure that at the very least the Hessian and partial derivatives will be of great interest for people developing algorithms.

Here is a question regarding the Hessian calculation. In the HessianMatrix class, you offer the possibility to specify a non-isotropic smoothing. This is great because we can then accommodate e.g. 3D images for which the Z pixel size is different from the X and Y pixel size (95% of confocal images).

But for the Hessian to be calculated correctly, the partial derivatives should be scaled accordingly too, right?
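For context (independent of this PR), the usual way a per-axis sigma is derived for such anisotropic data is to divide a physical-units sigma by the pixel size in each dimension, so the smoothing is isotropic in physical space; the values below are purely illustrative:

final double sigmaPhysical = 1.0;                 // desired smoothing in physical units, e.g. µm
final double[] pixelSize = { 0.2, 0.2, 1.0 };     // X, Y, Z pixel size in the same units
final double[] sigma = new double[ pixelSize.length ];
for ( int d = 0; d < sigma.length; ++d )
    sigma[ d ] = sigmaPhysical / pixelSize[ d ];  // here: { 5, 5, 1 } pixels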

hanslovsky commented 7 years ago

@dietzc What I don't like about the ExecutorService pattern is that we have to somehow arbitrarily pick the number of tasks, e.g. as in DoG:

// FIXME find better heuristic?
final int numThreads = Runtime.getRuntime().availableProcessors();
final int numTasks = numThreads <= 1 ? 1 : numThreads * 20;

I will keep the ExecutorService pattern but add a nTasks parameter so the caller has to decide how many tasks he'd like to run.
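A hypothetical call site once an explicit nTasks parameter exists; the method name and signature are illustrative only:

final int nThreads = Runtime.getRuntime().availableProcessors();
final ExecutorService es = Executors.newFixedThreadPool( nThreads );
// the caller decides the granularity, e.g. oversubscribe for better load balancing
final int nTasks = 4 * nThreads;
// hessianMatrix( source, sigma, nTasks, es );   // illustrative signature, not the final API
es.shutdown();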

@tinevez I will have to think about that!

hanslovsky commented 7 years ago

I finished the changes to the interfaces and package structure. In detail, I moved classes into different packages, everything relative to net.imglib2.algorithm:

A high level overview of the changes to HessianMatrix and TensorEigenValues:

If there is a consensus that this PR is ready for a merge, I will rebase for a cleaner history.

hanslovsky commented 7 years ago

@tinevez We could add an overload that takes the step size for the finite differences to net.imglib2.algorithm.gradient.PartialDerivative. Currently it is set to 1 by default. This would allow for arbitrary step sizes in the partial derivatives but increase the number of overloads by a factor of 2 (for PartialDerivative and HessianMatrix) unless it is a required parameter.
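A sketch of what such an overload could look like, assuming a central difference where the sampled neighbors stay one pixel apart and step is the physical spacing along the differentiated dimension (this is not the PR's actual code):

public static < T extends RealType< T > > void gradientCentralDifference(
        final RandomAccessible< T > source,
        final RandomAccessibleInterval< T > gradient,
        final int dimension,
        final double step )
{
    final Cursor< T > result = Views.flatIterable( gradient ).localizingCursor();
    final RandomAccess< T > access = source.randomAccess();
    while ( result.hasNext() )
    {
        final T t = result.next();
        access.setPosition( result );
        access.bck( dimension );
        final double back = access.get().getRealDouble();
        access.fwd( dimension );
        access.fwd( dimension );
        final double front = access.get().getRealDouble();
        // ( f( x + 1 ) - f( x - 1 ) ) / ( 2 * step )
        t.setReal( ( front - back ) / ( 2 * step ) );
    }
}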

@dietzc Once this is merged I will look into making an op from this.

hanslovsky commented 7 years ago

@tinevez I think the appropriate way to get partial derivatives right is to re-sample the (smoothed) input image for isotropy. I would leave it to the caller to do that. What I add instead is a View that takes a Hessian matrix as input and scales the second derivatives according to a sigma parameter (the same you would use for smoothing), something like this:

public static < T extends RealType< T > > AbstractConvertedRandomAccessible< T, T > scaleHessian( final RandomAccessible< T > hessian, final T t, final double... sigma )
{
    final double minSigma = Arrays.stream( sigma ).reduce( Double.MAX_VALUE, ( d1, d2 ) -> Math.min( d1, d2 ) );
    final double minSigmaSq = minSigma * minSigma;
    final double[] scales = new double[ sigma.length * ( sigma.length + 1 ) / 2 ];
    for ( int i1 = 0, k = 0; i1 < sigma.length; ++i1 )
        for ( int i2 = i1; i2 < sigma.length; ++i2, ++k )
            scales[ k ] = sigma[ i1 ] * sigma[ i2 ] / minSigmaSq;
    final AbstractConvertedRandomAccessible< T, T > wrapped = new AbstractConvertedRandomAccessible< T, T >( hessian )
    {
        @Override
        public AbstractConvertedRandomAccess< T, T > randomAccess()
        {
            return new AbstractConvertedRandomAccess< T, T >( hessian.randomAccess() )
            {
                private final T tt = t.createVariable(); // each access gets its own variable instead of sharing t
                @Override
                public T get()
                {
                    tt.set( source.get() );
                    tt.mul( scales[ source.getIntPosition( sigma.length ) ] );
                    return tt;
                }

                @Override
                public AbstractConvertedRandomAccess< T, T > copy()
                {
                    // create an independent access and move it to this access' position
                    final AbstractConvertedRandomAccess< T, T > copy = randomAccess();
                    copy.setPosition( this );
                    return copy;
                }
            };
        }

        @Override
        public AbstractConvertedRandomAccess< T, T > randomAccess( final Interval interval )
        {
            return randomAccess();
        }
    };
    return wrapped;
}

Note that the scaling factors are normalized by the smallest sigma squared. The scaling is quadratic in sigma because the Hessian matrix contains second derivatives. This should have minimal overhead, and you could even write the contents of the view back into the Hessian matrix in a single pass without allocating extra memory. Let me know what you think.
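A possible call site for the sketch above, assuming hessianMatrix is the (n+1)-dimensional Hessian image with DoubleType pixels and sigma holds the smoothing sigmas (the variable names are illustrative):

final double[] sigma = { 2.0, 2.0, 0.4 };
final RandomAccessible< DoubleType > scaled = scaleHessian( hessianMatrix, new DoubleType(), sigma );
// reading from scaled yields each second derivative multiplied by
// sigma[ i ] * sigma[ j ] / minSigma^2; if an in-place result is preferred,
// the values can be copied back into hessianMatrix in a single pass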

hanslovsky commented 7 years ago

I added a convenience method to scale the Hessian matrix based on a sigma[] parameter passed by the caller (cf. the last commit, @tinevez). I also added the missing license headers (how did I miss them?).

Is this ready to be merged now?

tinevez commented 7 years ago

I could use this PR, once merged, in imagej-ops to port the tubeness op.

hanslovsky commented 7 years ago

ojAlgo v43 was just released: optimatika/ojalgo@5487fa89324cdf077b2c4b6ec1b91f7dfc784b0c I will update the tensor eigenvalues and matrices to use ojAlgo as the backend instead of commons-math, and then we should be able to merge this (after further review).

Update: Once this is confirmed for a merge, I will rebase into a single commit.

axtimwalde commented 7 years ago

@/all Is this ready for a merge?

hanslovsky commented 7 years ago

It is from my side (after a rebase, of course).

tpietzsch commented 7 years ago

I'm looking at it now

hanslovsky commented 7 years ago

#41 is the same pull request, but from a rebased branch so that it has a single commit.