NDCLab / pepper-pipeline

tool | Python Easy Pre-Processing EEG Reproducible Pipeline
GNU Affero General Public License v3.0

Split_half assertion error #403

Closed F-said closed 2 years ago

F-said commented 2 years ago

Describe the bug @DMRoberts This seems like a non-issue, but I wanted to check with someone who understands the maths. I received this error once when running pytest on all tests:

```
>       assert pytest.approx(
            [-.13, -.39, .20], abs=.02) == list(result.reliability)
E       assert approx([-0.13 ± 2.0e-02, -0.39 ± 2.0e-02, 0.2 ± 2.0e-02]) == [-0.1285093452134843, -0.3958049145181364, 0.2203004703336721]
E        +  where approx([-0.13 ± 2.0e-02, -0.39 ± 2.0e-02, 0.2 ± 2.0e-02]) = <function approx at 0x7fc523cec820>([-0.13, -0.39, 0.2], abs=0.02)
E        +    where <function approx at 0x7fc523cec820> = pytest.approx
E        +  and   [-0.1285093452134843, -0.3958049145181364, 0.2203004703336721] = list(distribution(mean=-0.1285093452134843, lower=-0.3958049145181364, upper=0.2203004703336721))
E        +    where distribution(mean=-0.1285093452134843, lower=-0.3958049145181364, upper=0.2203004703336721) = split_half_result(correlation=distribution(mean=-0.07417159119258929, lower=-0.24673116502025053, upper=0.12378520908484132), reliability=distribution(mean=-0.1285093452134843, lower=-0.3958049145181364, upper=0.2203004703336721)).reliability
```
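For what it's worth, the miss here is tiny: the third element of `result.reliability` differs from the expected `0.20` by roughly `0.0203`, just past `abs=.02`, while the first two elements are comfortably inside the tolerance. A quick check, using the values copied from the traceback:

```python
# Values copied from the failing assertion above.
observed = [-0.1285093452134843, -0.3958049145181364, 0.2203004703336721]
expected = [-.13, -.39, .20]

# Element-wise absolute differences against the expected values.
diffs = [abs(o, ) if False else abs(o - e) for o, e in zip(observed, expected)]
diffs = [abs(o - e) for o, e in zip(observed, expected)]

# The third difference (~0.0203) exceeds abs=.02, so the comparison fails...
assert any(d > 0.02 for d in diffs)
# ...but every element is within 0.03, so a slightly wider tolerance passes.
assert all(d <= 0.03 for d in diffs)
```

So widening the tolerance to, say, `abs=.03` would have made this run pass, at the cost of a slightly looser test.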
DMRoberts commented 2 years ago

@F-said Since the values involve random sampling, I used pytest.approx when comparing against the result from the splithalf R package. I think we may just have to increase the tolerance for equality between the two values (the abs=.02). Another option might be to fix the random seed when testing, though since the comparison values are generated from an R package, it may be trickier to match the two.
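Fixing the seed on the Python side is straightforward in principle; the sketch below uses the stdlib random module purely for illustration, since the pipeline's actual RNG calls may differ. As noted, the hard part is reproducing the exact draws of the splithalf R package, whose RNG stream Python does not share:

```python
import random

# Hypothetical sketch: seeding before the split-half resampling makes the
# Python side deterministic across test runs. The seed value is arbitrary.
random.seed(403)
draw_a = [random.random() for _ in range(3)]

# Re-seeding with the same value reproduces the identical sequence.
random.seed(403)
draw_b = [random.random() for _ in range(3)]

assert draw_a == draw_b  # same seed, same draws
```

This would make the Python-side reliability estimates stable run-to-run, but the R-generated comparison values would still only agree up to sampling noise, so some tolerance in pytest.approx remains necessary either way.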