Closed BjoernLudwigPTB closed 2 years ago
Merging #287 (6663bcd) into main (c264800) will increase coverage by 16.77%. The diff coverage is 100.00%.

:exclamation: Current head 6663bcd differs from pull request most recent head 3e37e33. Consider uploading reports for the commit 3e37e33 to get more accurate results.
@@ Coverage Diff @@
## main #287 +/- ##
===========================================
+ Coverage 60.42% 77.20% +16.77%
===========================================
Files 29 29
Lines 2239 2233 -6
Branches 366 363 -3
===========================================
+ Hits 1353 1724 +371
+ Misses 783 382 -401
- Partials 103 127 +24
Impacted Files | Coverage Δ | |
---|---|---|
src/PyDynamic/model_estimation/fit_filter.py | 90.34% <100.00%> (+0.68%) | :arrow_up: |
src/PyDynamic/uncertainty/interpolate.py | 90.12% <100.00%> (+1.61%) | :arrow_up: |
src/PyDynamic/misc/filterstuff.py | 63.93% <0.00%> (+1.63%) | :arrow_up: |
src/PyDynamic/uncertainty/propagate_convolution.py | 100.00% <0.00%> (+4.00%) | :arrow_up: |
src/PyDynamic/uncertainty/propagate_filter.py | 78.46% <0.00%> (+14.35%) | :arrow_up: |
src/PyDynamic/uncertainty/propagate_DFT.py | 65.57% <0.00%> (+18.30%) | :arrow_up: |
src/PyDynamic/misc/SecondOrderSystem.py | 100.00% <0.00%> (+45.45%) | :arrow_up: |
src/PyDynamic/uncertainty/propagate_MonteCarlo.py | 68.62% <0.00%> (+45.88%) | :arrow_up: |
... and 2 more | | |
We observed a lot of errors that occurred very frequently. This PR is meant to identify the causes and resolve the issues.

- `test_dft_deconv()` needed another increase of the threshold against which we test that its results are correct.
- For `interp1d_unc()`, the difference between the orders of magnitude of the input signal and the interpolation nodes must not be too large to keep the calculations' results precise, so we adapted the strategy `values_uncertainties_kind()` to ensure the values can be requested from a smaller bounded region than in the general case.
- `interp1d()` requires the values to be distinct, which required another adaptation of the strategy.
- `make_equidistant()` calls caused out-of-memory errors when the spacings became too small, so we do not draw them randomly anymore but compute them from the number of interpolation nodes, to spread them more or less evenly over the randomly drawn interpolation interval.
- `interp1d()` does not guarantee anymore that interpolated values stay strictly within the original bounds, so we had to include a check in `test_linear_in_interp1d_unc()` that the values are at least close to the original bounds if not within them.
- `test_wrong_input_lengths_call_make_equidistant()` was adapted, just because it goes.
- `test_firuncfilter_mc_uncertainty_comparison.py` used a different `tau` compared to the original implementation. This is fixed now in 6286c754d7aad93ca903270b74b9cef15c494207.
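The adapted drawing of test values — bounded to a small region and pairwise distinct — can be sketched in plain NumPy. The function name, bounds, and redraw loop below are illustrative assumptions, not PyDynamic's actual hypothesis strategy:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_bounded_distinct_values(n, low=-10.0, high=10.0, rng=rng):
    # Restrict draws to a small bounded region so that the input signal and
    # the interpolation nodes share comparable orders of magnitude, keeping
    # the interpolation numerically precise (bounds chosen for illustration).
    values = rng.uniform(low, high, size=n)
    # interp1d requires distinct values, so redraw until all are unique.
    while len(np.unique(values)) < n:
        values = rng.uniform(low, high, size=n)
    return values

values = draw_bounded_distinct_values(10)
```

Drawing from a continuous uniform distribution makes duplicates vanishingly unlikely, so the redraw loop is merely a safety net.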
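The fix for the out-of-memory errors — computing node positions from a node count instead of drawing spacings randomly — can be sketched as follows; the interval bounds and helper name are assumptions for illustration:

```python
import numpy as np

def draw_equidistant_nodes(n_nodes, rng):
    # Draw only a random interpolation interval, then spread the nodes
    # evenly over it with linspace. Drawing individual spacings instead
    # could produce arbitrarily small steps and blow up the number of
    # equidistant points (interval bounds chosen for illustration).
    start = rng.uniform(-100.0, 100.0)
    length = rng.uniform(1.0, 10.0)
    return np.linspace(start, start + length, n_nodes)

nodes = draw_equidistant_nodes(10, np.random.default_rng(42))
```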
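The relaxed bounds check described for `test_linear_in_interp1d_unc()` amounts to comparing the interpolated values against the original value range with a small tolerance. The helper name and the tolerance below are hypothetical:

```python
import numpy as np

def assert_within_or_close_to_bounds(interpolated, original, atol=1e-7):
    # interp1d no longer guarantees that interpolated values stay strictly
    # inside the range of the original values, so accept small overshoots
    # up to atol beyond either bound (tolerance chosen for illustration).
    lower, upper = np.min(original), np.max(original)
    assert np.all(interpolated >= lower - atol), "values fall below lower bound"
    assert np.all(interpolated <= upper + atol), "values exceed upper bound"
```

A value that overshoots a bound by less than `atol` passes, while a clear violation still fails the test.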