**Closed** — Ddelval closed this pull request 2 years ago.
Base: 84.96% // Head: 84.97% // Increases project coverage by +0.00% :tada:
Coverage data is based on head (435464e) compared to base (46fc684). Patch coverage: 100.00% of modified lines in the pull request are covered.
:umbrella: View full report at Codecov.
@vnmabus I am not sure whether `assert_all_close_normalized` should be defined in the same .py file, since it is functionality that might be useful in other tests.

Also, I have changed all the calls to `np.assert_all_close` that used a custom `rtol` or `atol` value, since I believe this normalization makes it easier to understand how large the tolerance being used is. However, I have not modified the calls to `np.assert_all_close` that use the default tolerances, since those are so small that they can be understood as the arrays being practically equal. Let me know if you think this is the right approach.
By taking into account the L1 norm of the vectors to be compared, it is possible to use smaller tolerances.
In PCA, small values in the scores or loadings have little significance. Therefore, the elements of the vectors that are small compared to the rest can be compared with a higher tolerance. This is accomplished by setting the absolute tolerance of each element to be a fraction of the L1 norm of the vector divided by the number of elements.
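A minimal sketch of what such a helper could look like, assuming NumPy; the name `assert_all_close_normalized` matches the discussion above, but the signature and the `fraction` parameter here are hypothetical:

```python
import numpy as np


def assert_all_close_normalized(actual, expected, rtol=1e-7, fraction=1e-4):
    """Compare arrays with an absolute tolerance scaled to the data.

    Hypothetical sketch: atol is set to a fraction of the L1 norm of
    the expected vector divided by its number of elements, so entries
    that are small relative to the rest of the vector are compared
    with a looser absolute tolerance.
    """
    expected = np.asarray(expected)
    atol = fraction * np.linalg.norm(expected.ravel(), ord=1) / expected.size
    np.testing.assert_allclose(actual, expected, rtol=rtol, atol=atol)


# Example: the small first entry differs by 0.01, which is negligible
# next to the L1 norm of the vector, so the normalized check passes
# even though a plain assert_allclose with the same rtol would fail.
a = np.array([1.0, 1000.0])
b = np.array([1.01, 1000.0])
assert_all_close_normalized(b, a)
```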
This change was required due to differences between scikit-fda and fda.usc in their use of numerical quadrature. In particular, fda.usc always uses Simpson's weights for the inner products (see inprod.fda.R:87 in the fda.usc source), while its calculation of the principal components might use uniform weights (fdata2pc.R:429 in the fda.usc source). In scikit-fda, we always use the same weights for both operations.
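To illustrate why the choice of weights matters, here is a sketch of composite Simpson weights versus uniform weights on an equispaced grid. This is illustrative only and does not reproduce the actual fda.usc or scikit-fda code; the function name and grid are made up:

```python
import numpy as np


def simpson_weights(grid):
    """Composite Simpson quadrature weights (illustrative sketch).

    Assumes an equispaced grid with an odd number of points:
    weights are h/3 * [1, 4, 2, 4, ..., 2, 4, 1].
    """
    n = len(grid)
    if n % 2 == 0:
        raise ValueError("Simpson's rule needs an odd number of points")
    h = grid[1] - grid[0]
    w = np.ones(n)
    w[1:-1:2] = 4.0  # odd interior indices
    w[2:-1:2] = 2.0  # even interior indices
    return w * h / 3.0


grid = np.linspace(0.0, np.pi, 101)
f = np.sin(grid)  # integral of sin over [0, pi] is exactly 2

simpson = np.sum(simpson_weights(grid) * f)   # O(h^4) error
uniform = np.sum(f) * (grid[1] - grid[0])     # uniform weights, O(h^2) error
```

The two quadratures converge to the same value but at different rates, so inner products and PCA scores computed with mismatched weights differ by small amounts, which is exactly the kind of discrepancy the normalized tolerance is meant to absorb.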