Closed: sepro closed this pull request 1 year ago.
Merging #95 (7eebf7c) into dev (2a5ebe1) will decrease coverage by 0.00%. The diff coverage is 96.55%.
```diff
@@            Coverage Diff            @@
##              dev      #95     +/-  ##
==========================================
- Coverage   97.04%   97.04%   -0.01%
==========================================
  Files          29       29
  Lines        1898     1927      +29
==========================================
+ Hits         1842     1870      +28
- Misses         56       57       +1
==========================================
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| statannotations/Annotator.py | 91.48% <66.66%> (-0.18%) | :arrow_down: |
| statannotations/Annotation.py | 100.00% <100.00%> (ø) | |
| statannotations/stats/StatResult.py | 92.30% <100.00%> (+0.81%) | :arrow_up: |
| tests/test_annotation.py | 100.00% <100.00%> (ø) | |
| tests/test_stat_result.py | 100.00% <100.00%> (ø) | |
Refactored the code to make it testable and added tests for full coverage of the new lines. @trevismd, could you check the logic in stats.StatResult.is_significant()? I'm not 100% sure whether self._corrected_significance is used as intended here.
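For reference, here is a minimal, hypothetical sketch of the kind of check being discussed: a result that prefers the verdict stored by a multiple-comparison correction step over a plain p-value-vs-alpha comparison. The class and attribute names (other than `_corrected_significance`, which is mentioned above) are illustrative assumptions, not statannotations' actual API.

```python
class StatResultSketch:
    """Illustrative stand-in for a StatResult-like object (not the real class)."""

    def __init__(self, pvalue, alpha=0.05, corrected_significance=None):
        self.pvalue = pvalue
        self.alpha = alpha
        # If a multiple-comparison correction was applied, the correction step
        # may have already recorded its own True/False significance verdict.
        self._corrected_significance = corrected_significance

    @property
    def is_significant(self):
        # Prefer the correction step's verdict when one was recorded;
        # otherwise fall back to comparing the raw p-value against alpha.
        if self._corrected_significance is not None:
            return self._corrected_significance
        return self.pvalue <= self.alpha
```

Under this reading, `_corrected_significance` acts as an override: when it is set, the raw p-value is ignored entirely, which is the behavior worth double-checking against the intended semantics.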
@trevismd finished refactoring the code in a few spots to get it working again after pulling in your suggestions. Also split the unit tests and put the functions in alphabetical order per your other suggestions. I think that covers all the comments.
Happy holidays!
I tested it and it works like a charm. Thanks Sepro for the nice feature!
Following the discussion with @trevismd here, the parameter show_non_significant has been refactored to hide_non_significant, and the threshold is now correctly based on alpha and the (corrected) p-value. The type of the result instance is checked, and a warning is raised when it is not a StatResult.
I've tested this locally and it works, though a unit test still needs to be added to cover these few lines of code. How would you like this to be tackled?
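To make the described behavior concrete, here is a hedged sketch of the filtering and type-check logic: hide annotations whose (corrected) p-value exceeds alpha, and warn when a result is not of the expected type. The function name `filter_annotations` and the use of plain dicts in place of StatResult instances are illustrative assumptions, not the actual statannotations implementation.

```python
import warnings

def filter_annotations(results, alpha=0.05, hide_non_significant=True):
    """Sketch: drop non-significant results, warning on unexpected types."""
    kept = []
    for result in results:
        # Stand-in type check: the real code would test isinstance(result, StatResult).
        if not isinstance(result, dict) or "pvalue" not in result:
            warnings.warn("Expected a StatResult-like object; keeping it unfiltered")
            kept.append(result)
            continue
        # Base the threshold on the corrected p-value when a correction was applied.
        pvalue = result.get("corrected_pvalue", result["pvalue"])
        if hide_non_significant and pvalue > alpha:
            continue
        kept.append(result)
    return kept
```

A unit test for this could feed in a mix of significant, non-significant, and corrected results and assert on which ones survive, plus use `pytest.warns` (or `warnings.catch_warnings`) to assert the warning fires for a wrong-typed input.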