Closed — etadobson closed this 5 years ago
How do you get the reference values for the tests? Also, `assertEquals(double, double, double)` should have a positive delta, not 0.0, no?
> How do you get the reference values for the tests?

Empirically. They are regression tests, to ensure behavior does not change.

> Also, `assertEquals(double, double, double)` should have a positive delta, not 0.0, no?
The idea is to fail the test if the known good value is not exactly the same as expected, hence the use of a zero delta.
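To illustrate the zero-delta reasoning, here is a minimal self-contained sketch (not the project's code; `doublesEqual` is a hypothetical helper mirroring JUnit's comparison semantics for `assertEquals(double, double, double)`):

```java
public class DeltaDemo {
    // Mirrors JUnit's double comparison: values are considered equal when
    // Double.compare says they match exactly, or when their absolute
    // difference is within delta. With delta == 0.0, only an exact match
    // passes, which is what a regression test pinned to a known good
    // value wants.
    static boolean doublesEqual(double expected, double actual, double delta) {
        return Double.compare(expected, actual) == 0
                || Math.abs(expected - actual) <= delta;
    }

    public static void main(String[] args) {
        System.out.println(doublesEqual(0.82, 0.82, 0.0)); // true: exact match
        System.out.println(doublesEqual(0.82, 0.87, 0.0)); // false: any drift fails
        System.out.println(doublesEqual(0.82, 0.87, 0.1)); // true: within delta
    }
}
```

A positive delta tolerates platform-level floating-point drift; a zero delta deliberately does not, so any behavioral change surfaces immediately.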
I ran the unit tests on my macOS system in Eclipse, and everything passed.
But on Travis, we see these problems:
LiICQTest.testPValue:107 expected:<0.82> but was:<0.87>
MTKTTest.testMTKTpValueImage:196 expected:<0.47> but was:<0.48>
To make the P-value tests faster, you can use fewer iterations (e.g., 10 instead of 100) and/or smaller images. I did not dig into `testMTKTPosCorr`, so I do not have advice about that.
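To show why fewer iterations speed things up, here is a hypothetical permutation-style P-value sketch (not the Ops implementation; `pValue` and `shuffledScore` are illustrative names). The iteration count trades runtime for P-value granularity, which is 1/nIterations:

```java
import java.util.Random;

public class PValueSketch {
    // Fraction of randomized trials whose score is at least the observed
    // score. Fewer iterations run faster but yield a coarser estimate.
    static double pValue(double observed, double[] data, int nIterations, long seed) {
        Random rng = new Random(seed); // fixed seed => reproducible test runs
        int atLeastAsExtreme = 0;
        for (int i = 0; i < nIterations; i++) {
            if (shuffledScore(data, rng) >= observed) atLeastAsExtreme++;
        }
        return (double) atLeastAsExtreme / nIterations;
    }

    // Placeholder scoring function: mean of a random resample of the data.
    static double shuffledScore(double[] data, Random rng) {
        double sum = 0;
        for (int i = 0; i < data.length; i++) {
            sum += data[rng.nextInt(data.length)];
        }
        return sum / data.length;
    }

    public static void main(String[] args) {
        double[] data = {0.1, 0.5, 0.9, 0.3, 0.7};
        System.out.println(pValue(0.5, data, 10, 42L));  // coarse, fast
        System.out.println(pValue(0.5, data, 100, 42L)); // finer granularity
    }
}
```

Smaller images shrink the per-iteration cost the same way: each trial touches fewer pixels, so the whole loop finishes sooner.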
In an effort to reproduce, I ran the tests on one of our Linux systems, and found different problems (!):
MTKTTest.testMTKTpValueImage:196 expected:<0.47> but was:<0.52>
MTKTTest.testMTKTpValueRandom:180 expected:<0.28> but was:<0.23>
DefaultPearsonsTest.testPValue:108 expected:<0.8> but was:<0.81>
It is unfortunate that the behavior appears to be platform-specific. We will need to troubleshoot that.
One additional issue: `MTKTTest.testMTKTpValueImage` crops the big images to smaller ones so the test can run quickly, but the code surrounding the cropping is confusing and could stand some simplification.
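For reference, the cropping step could be as simple as the following sketch (plain arrays rather than the project's image types; `crop` is a hypothetical helper, not the test's actual code):

```java
public class CropDemo {
    // Take the top-left w x h block of a larger image so the
    // permutation test runs quickly on a small amount of data.
    static double[][] crop(double[][] image, int w, int h) {
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                out[y][x] = image[y][x];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] big = new double[512][512];
        double[][] small = crop(big, 16, 16);
        System.out.println(small.length + "x" + small[0].length); // prints "16x16"
    }
}
```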
I think the problem lies in the `DefaultPValue` class ... generating seeds for each thread. If the thread count changes, the calculated values in these tests change as well. So it's not necessarily a Linux-versus-macOS issue, at least at the moment. I'm on it. Thanks @ctrueden!
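The pitfall can be demonstrated in isolation (a hypothetical illustration, not the `DefaultPValue` code; `sampleMean` and the seeding formulas are invented for this sketch). Deriving a worker's RNG seed from the thread count makes results depend on how many threads the host machine happens to use:

```java
import java.util.Random;

public class SeedingDemo {
    // Seed depends on nThreads: different machines draw different
    // random streams, so computed statistics differ across platforms.
    static double sampleMean(int nThreads, int workerIndex) {
        Random rng = new Random(1000L * nThreads + workerIndex);
        double sum = 0;
        for (int i = 0; i < 100; i++) sum += rng.nextDouble();
        return sum / 100;
    }

    // Seed depends only on the worker index: the same stream is drawn
    // regardless of how many threads the machine uses.
    static double sampleMeanFixed(int workerIndex) {
        Random rng = new Random(workerIndex);
        double sum = 0;
        for (int i = 0; i < 100; i++) sum += rng.nextDouble();
        return sum / 100;
    }

    public static void main(String[] args) {
        // Same worker, different thread counts: almost certainly unequal,
        // because the two seeds select different random streams.
        System.out.println(sampleMean(4, 0) == sampleMean(8, 0));
        // Seeding independent of thread count: deterministic everywhere.
        System.out.println(sampleMeanFixed(0) == sampleMeanFixed(0)); // true
    }
}
```

Seeding each worker from its stable worker index (rather than anything derived from the runtime thread count) is one way to make such tests reproducible across machines.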
I believe this branch is ready to merge at this point. If folks don't mind taking another look, I'd appreciate it. 👍
This colocalization Op is based on the MTKT algorithm from Wang et al. (2017).