In the existing implementation of error normalisation, the normalisation factor depends on the test tolerance.
The error should be independent of the desired tolerance. One practical reason is that it allows a "correct" simulation to be run, the error measured, and the tolerance then set accordingly. If the error depends on the tolerance, the test author would need to iterate towards the correct tolerance.
In this new implementation, the normalisation factor is at minimum `epsilon(valorg)`. Furthermore, for vector values, the normalisation factor is the magnitude of the full vector, not just the component along the given axis.
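The sketch below illustrates the idea, not the actual test-suite code: the names `valorg`, `valnew` and `vecorg` are hypothetical stand-ins for the reference value, the value from the current run, and the full reference vector.

```fortran
program normalisation_sketch
   implicit none
   real :: valorg, valnew, err
   real :: vecorg(3)

   vecorg = [3.0, 4.0, 0.0]   ! full reference vector (illustrative values)
   valorg = vecorg(1)         ! component being compared
   valnew = 3.1               ! corresponding value from the current run

   ! Normalise by the magnitude of the vector, never letting the factor
   ! fall below epsilon(valorg); the result does not depend on the tolerance.
   err = abs(valnew - valorg) / max(norm2(vecorg), epsilon(valorg))
   print *, 'normalised error = ', err
end program normalisation_sketch
```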
Test tolerances have been adjusted accordingly. Notably, `star_sph` now runs with a much tighter tolerance.