AFM-SPM / TopoStats

An AFM image analysis program to batch process data and obtain statistics from images
https://afm-spm.github.io/TopoStats/
GNU Lesser General Public License v3.0

Added 'pm' and 'mm' for very small and large scans #796

Closed MaxGamill-Sheffield closed 7 months ago

MaxGamill-Sheffield commented 7 months ago

2 line fix! (No issue made as it would take so much longer)

Code broke on very small scans: when e.g. a 1x512 px scan came through, the scale of one side was in 'pm' rather than 'nm' or 'um'. Although such images are later removed by the image size limits, they currently prevent the rest of the code from running.

This fix adds no tests because the values come from the channel data (a pySPM object), and the values identified here that could be tested are returned from a pySPM function rather than a child object, so they cannot be overwritten.

We are fine with this not being tested.
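For context, the scaling logic this fix touches can be sketched as follows. This is a minimal, hypothetical illustration, not the actual TopoStats implementation: the real code lives in `_spm_pixel_to_nm_scaling` and the names here are made up, but the unit set matches what the PR describes ('pm' and 'mm' newly added alongside 'nm' and 'um').

```python
# Hypothetical sketch of unit handling in a pixel-to-nm scaling helper.
# Names are illustrative; the real TopoStats/pySPM code may differ.

UNIT_TO_NM = {
    "mm": 1e6,   # very large scans (unit added by this PR)
    "um": 1e3,
    "nm": 1.0,
    "pm": 1e-3,  # very small scans (unit added by this PR)
}

def pixel_to_nm_scaling(size: float, unit: str, pixels: int) -> float:
    """Convert a physical scan size in `unit` to nanometres per pixel."""
    return size * UNIT_TO_NM[unit] / pixels
```

With an unrecognised unit such as 'pm' missing from the mapping, a lookup like this would raise a `KeyError` and stop the pipeline, which is consistent with the failure mode described above.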

codecov[bot] commented 7 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparison is base (78506fe) 84.36% compared to head (de3ac4b) 84.73%. Report is 50 commits behind head on main.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main     #796      +/-   ##
==========================================
+ Coverage   84.36%   84.73%   +0.36%
==========================================
  Files          21       21
  Lines        3134     3196      +62
==========================================
+ Hits         2644     2708      +64
+ Misses        490      488       -2
```

:umbrella: View full report in Codecov by Sentry.

ns-rse commented 7 months ago

This looks like it's related to the work @iobataya was doing in #659 and #779.

One thing I particularly liked about those PRs is that they didn't hard-code values; instead, values were parametrised in the configuration, which makes for a much more generalisable and flexible solution.

I like to write tests because bugs are found in scenarios that hadn't been anticipated, and we want to make sure we cover them in future work/developments. They're relatively easy to construct (since you have the case that caused the error), and not doing so often comes back to bite someone in the future (sometimes your future self).

MaxGamill-Sheffield commented 7 months ago

Yes, the work of @iobataya looks significantly more comprehensive and the PR will be greatly welcomed when it is finished.

However, for now this hot fix provides the solution we need to continue running datasets and avoid confusion / stoppage.

I also want to make it clear that the lack of tests is not because I disagree with your mantra above (I absolutely agree), but because, as mentioned before, the function to be tested, `_spm_pixel_to_nm_scaling`, receives the channel data (a pySPM object), and the 'um' / 'pm' values are returned from the `.pxs()` function rather than stored on a `.pxs` attribute. This makes it very difficult to insert or overwrite these values to test all the options we know of.

I've tried using the test below, which fails because the function cannot be assigned a variable:

```python
# FIXME : Get this test working
@pytest.mark.parametrize(
    ("unit", "x", "y", "expected_px2nm"),
    [
        ("um", 100, 100, 97.65625),
        ("nm", 50, 50, 0.048828125),
    ],
)
def test__spm_pixel_to_nm_scaling(
    load_scan_spm: LoadScans,
    spm_channel_data: pySPM.SPM.SPM_image,
    unit: str,
    x: int,
    y: int,
    expected_px2nm: float,
) -> None:
    """Test extraction of pixels to nanometre scaling."""
    # Issue: pxs() is a function that returns the data, not an object,
    # so assigning a list here does not take effect inside the loader.
    spm_channel_data.pxs = [(x, unit), (y, unit)]
    result = load_scan_spm._spm_pixel_to_nm_scaling(spm_channel_data)
    assert result == expected_px2nm
```

Along with the following conftest fixture:

```python
@pytest.fixture()
def spm_channel_data() -> pySPM.SPM.SPM_image:
    """Instantiate channel data from a LoadScans object."""
    scan = pySPM.Bruker(RESOURCES / "minicircle.spm")
    return scan.get_channel("Height")
```

But can't make this swap of variables work.

Can you think of any ideas off the top of your head, @ns-rse, that don't involve deconstructing the pySPM class to create our own dummy object (the only real solution I can think of, but I don't have the time to investigate), or adding the erroneous file (which wouldn't cover all possibilities)? If so, I'll try to add a test for it.

ns-rse commented 7 months ago

I think situations like this are where mocking comes into play in tests (there is a pytest-mock extension that fits with the framework we use).

You set up an object that mimics the class you want to test and define what attributes you want it to have, so you don't have to generate a bona fide real-world example to cover the scenario you want to test.
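As a sketch of that idea (using the standard library's `unittest.mock`, which `pytest-mock` wraps; the channel interface here is assumed from the discussion above): because `pxs()` is a method on the real pySPM object, you configure the mock's return value rather than assigning an attribute, which is what tripped up the earlier attempt.

```python
from unittest.mock import MagicMock

# Build a stand-in for a pySPM channel object. pxs() is a method on the
# real object, so we configure the mock method's return value instead of
# assigning a list to an attribute.
channel = MagicMock()
channel.pxs.return_value = [(100, "um"), (100, "um")]

pairs = channel.pxs()  # behaves like the real call
```

Any code that calls `channel.pxs()` now receives the configured `(value, unit)` pairs, with no real `.spm` file needed.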

MaxGamill-Sheffield commented 7 months ago

> I think situations like this are where the mocking comes in to play in tests (there is a pytest-mock extension to fit with the framework we use).

Perfect! This is exactly what I needed - tests made! 👍