marcelo-alvarez closed this issue 1 year ago
@dmargala do you have cycles to look at this? We may be too picky about how closely the arrays need to match to pass the test, but presumably they did pass when the tests were originally written, so it's worth a little thought and a comparison with the other studies where we signed off on gpu_specter as being close enough for production.
@sbailey I'll take a look today. There isn't much info to go on in the output of the first error. The second failure definitely looks like it might be too picky:
np.testing.assert_allclose(flux0, flux1, rtol=1e5*eps_double, atol=0, err_msg=f"where: {where}")
...
Not equal to tolerance rtol=2.22045e-11, atol=0
where: (array([1]), array([2]))
Mismatched elements: 1 / 250 (0.4%)
Max absolute difference: 7.03266778e-10
Max relative difference: 2.40644863e-11
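For context, the rtol in that assertion is tied to float64 machine epsilon: 1e5 * eps_double is about 2.22e-11, and the reported max relative difference of 2.4e-11 only barely exceeds it. A minimal sketch (array values chosen purely for illustration, not taken from the test) of how a difference of that size trips the tolerance:

```python
import numpy as np

# Machine epsilon for float64; the failing test uses rtol = 1e5 * eps_double.
eps_double = np.finfo(np.float64).eps   # ~2.220446e-16
rtol = 1e5 * eps_double                 # ~2.220446e-11

a = np.array([1.0])
b = a * (1.0 + 3e-11)  # relative difference ~3e-11, just above rtol

try:
    np.testing.assert_allclose(a, b, rtol=rtol, atol=0)
    failed = False
except AssertionError:
    failed = True

print(failed)  # True: 3e-11 > 2.22e-11

# One order of magnitude more slack would accept this difference:
np.testing.assert_allclose(a, b, rtol=1e6 * eps_double, atol=0)
```

This suggests the fix could be as simple as loosening rtol slightly, provided a difference at the 1e-11 level is physically negligible for the extraction.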
Fixed by PR #79. Closing.
The current main branch fails unit tests on an interactive node of Perlmutter, in particular
gpu_specter.test.test_core.TestCore and gpu_specter.test.test_extract.TestExtract.

@dmargala do you know whether this is an actual failure in gpu_specter, or rather in the way the unit test is being set up?
Commands to reproduce failures on Perlmutter: