jrkerns / pylinac

An image analysis library for medical physics

New beam parameters module #331

Open alanphys opened 3 years ago

alanphys commented 3 years ago

Hi

As I mentioned on the group, I am developing a new beam parameters module from the old FlatSym. When I was validating BeamScheme (https://github.com/alanphys/BeamScheme) against pylinac I noticed some small discrepancies, on the order of a pixel (0.3 mm), in the field edges, centre, size and penumbra. At the time, since BeamScheme gave correct results against other programs, I wrote it off to the interpolation scheme that pylinac uses. Now I have had to resolve these differences to correctly test the new module. These differences are due not so much to bugs as to conceptual differences in the way the field centre and normalisation are defined.

I thought I would document these here to clarify my own thinking and to assist anyone else observing these discrepancies.

Profile centre

Pylinac uses the image centre, which is defined as:

```python
x_center = self.shape[1] / 2
y_center = self.shape[0] / 2
```

For images with an even number of rows and columns the centre of the image actually lies between the four centre pixels, i.e. for an 8x8 image the image centre is the point (3.5, 3.5). The function above returns (4, 4), which it has to, since it must return a pixel that exists, and this can be any one of the four centre pixels.

For images with an odd number of rows and columns the situation is much easier: there is one central pixel. E.g. for a 9x9 image the centre pixel is (4, 4) and the image centre function returns the same.

For high-resolution images such as EPID and film this difference is very small (approx. 0.15 mm), but for low-resolution images such as 2D arrays the difference can be on the order of 5 mm.
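A minimal sketch of the arithmetic (assuming the returned coordinate is truncated to an integer pixel index, as described above):

```python
n_even, n_odd = 8, 9

# pixel-index centre, truncated to an existing pixel (assumed behaviour)
print(int(n_even / 2), int(n_odd / 2))    # 4 4
# theoretical centre of the pixel grid: (n - 1) / 2
print((n_even - 1) / 2, (n_odd - 1) / 2)  # 3.5 4.0
```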

To get around this I defined a new SingleProfile property:

```python
@property
def profile_center(self):
    """Returns the center index of the profile"""
    return (self.values.shape[0] - 1) / 2
```

This returns the theoretical centre index of the profile.

Normalisation

The second issue I ran into was normalisation. Pylinac has two normalisation methods: both sides grounded to the profile minimum and normalised to the profile maximum, or each side grounded to that side's minimum and normalised to the profile maximum. Again the problem is with 2D arrays. Most array software doesn't ground the profile values, as the assumption is that the array was zeroed before acquisition of the profile. Secondly, although FWHM is by definition half of the maximum, most arrays instead take half of the CAX (middle) value.

For flat fields this is not a problem, but for heavily "horned" fields the discrepancy in the field size can be 1-2 mm. To get around this I wrote a new method, dose_to_distance, which essentially does the same thing as _penumbra_point but caters for the different normalisation and grounding modes.
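A minimal sketch of the two choices involved; the helper below is hypothetical, not the actual dose_to_distance implementation:

```python
import numpy as np

def normalise(profile: np.ndarray, grounded: bool = True,
              norm_to_cax: bool = False) -> np.ndarray:
    """Hypothetical helper contrasting the grounding and normalisation modes."""
    values = profile - profile.min() if grounded else profile.astype(float)
    if norm_to_cax:
        # normalise to the CAX (central) value, interpolated for even lengths
        cax = (len(values) - 1) / 2
        norm = np.interp(cax, np.arange(len(values)), values)
    else:
        # normalise to the global maximum (the FWHM definition)
        norm = values.max()
    return values / norm
```

For a horned profile the CAX value is lower than the maximum, so half of the CAX is a lower absolute threshold than half of the maximum, and the measured field comes out correspondingly wider.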

Interpolation

The third issue is interpolation, as has been discussed elsewhere here and on the group. Pylinac currently does not use interpolation. Even for high-res images the difference can be on the order of 0.3 mm; for low-res images it can be up to 5 mm. Using interpolation in the FlatSym module does improve matters, but I find that I have to increase the interpolation factor up to 1000 before getting results comparable to other programs. The interpolation scheme is also a bit of a brute-force approach, interpolating the entire profile when only one point is needed. The dose_to_distance method instead uses inverse linear interpolation to get the theoretical index value corresponding to the threshold.
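The idea in a minimal sketch (a hypothetical function, not the module's actual code):

```python
import numpy as np

def threshold_index(values: np.ndarray, threshold: float) -> float:
    """Fractional index where a rising edge crosses `threshold`, by inverse
    linear interpolation between the two bracketing samples."""
    above = int(np.argmax(values >= threshold))  # first sample at/above threshold
    if above == 0:
        return 0.0
    lo, hi = values[above - 1], values[above]
    return (above - 1) + (threshold - lo) / (hi - lo)

# e.g. threshold_index(np.array([0., 10., 80., 100.]), 50) -> 1 + 40/70 = 1.571...
```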

Resolution

The profile resolution (or lack of it) is also a problem. Pylinac takes the image resolution. My feeling is that the profile should have its own intrinsic resolution, to reduce the dependence on the image, so I included another property, SingleProfile.dpmm.
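Roughly the idea (a sketch only; index_to_mm is a hypothetical helper):

```python
import numpy as np

class SingleProfile:
    """Sketch: the profile carries its own resolution, independent of the image."""

    def __init__(self, values: np.ndarray, dpmm: float = 1.0):
        self.values = values
        self.dpmm = dpmm  # samples per mm, settable independently of any image

    def index_to_mm(self, index: float) -> float:
        # convert a (possibly fractional) index into a physical distance
        return index / self.dpmm
```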

I realise that some of these things are personal preference, but I'll issue a pull request so you can look at what I have done so far and see if it will fit into the pylinac framework. Although I was initially thinking of modifying the existing FlatSym module, the changes are greater than I anticipated and it proved easier to create a new module, FieldParams, and retain the existing FlatSym.

Regards Alan

alanphys commented 3 years ago

I see profile.py and flatsym.py have been extensively revised. I will have to re-evaluate before posting a pull request.

Regards Alan

alanphys commented 3 years ago

Some more notes on normalisation

From version 2.4 pylinac uses the find_peaks algorithm to determine field size, edges and centre. This algorithm is based on scipy.signal.find_peaks. The evaluation height that scipy.signal.find_peaks uses is calculated from the max value of the peak to the lowest contour of the peak, i.e. the smallest value that exists in both tails. This effectively grounds the profile and results in a slightly smaller field size than would be the case for an ungrounded profile.
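The effect is easy to demonstrate on an ungrounded profile with a constant offset in the tails (illustrative values):

```python
import numpy as np
from scipy.signal import find_peaks, peak_widths

# ungrounded profile: tails sit at 5 rather than 0
profile = np.array([5, 5, 6, 50, 95, 100, 95, 50, 6, 5, 5], dtype=float)

peaks, _ = find_peaks(profile, prominence=10)
# the 50% level is measured down from the peak towards its lowest contour,
# i.e. 100 - 0.5 * (100 - 5) = 52.5, not 50 as for an ungrounded half-maximum
widths, eval_height, left, right = peak_widths(profile, peaks, rel_height=0.5)
print(eval_height)  # [52.5]
```

The evaluation level comes out higher than the naive half-maximum (52.5 vs 50 here), so the edges are found slightly further up the penumbra and the field comes out slightly smaller.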

Regards Alan

jrkerns commented 3 years ago

Hey Alan, thanks for the detailed work. Anytime I can get rid of custom code and replace it with third-party code I consider it a win. Unfortunately, that can mean slightly different behavior.

You have correctly identified the center pixel problem. In fact, I changed the center pixel definition in this release: https://github.com/jrkerns/pylinac/blob/master/pylinac/core/image.py#L271-L275 to offset by half a pixel. Some of the nuance comes when you want to get the value of the CAX, as you need an integer index.

Good point about normalization via max or CAX. I should think about that further.

Interpolation has been an interesting problem because half the time I needed the index (integer) and half the time I needed an interpolated float.
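A minimal sketch of the two cases (illustrative values only):

```python
import numpy as np

profile = np.array([0., 10., 80., 100., 80., 10., 0.])
center = (len(profile) - 1) / 2  # fractional for even-length profiles

# when an actual sample is needed: round to an integer index
cax_sample = profile[int(round(center))]
# when a sub-pixel value is needed: interpolate at the fractional position
cax_interp = np.interp(center, np.arange(len(profile)), profile)
```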

Pull requests welcome.

alanphys commented 3 years ago

Thanks James

I'm not sure what the issue with custom code is. You have no guarantee that third-party code is correct, and if it isn't, it is difficult to debug someone else's code. All code was at some stage custom.

The problem with offsetting the centre by half a pixel is that this is only valid for images with an even number of pixels. Arrays quite often have an odd number of detectors, because that places a detector at the centre of the array. In this case the offset will put you even further off.

I will bear in mind the point about interpolation.

Regards Alan

jrkerns commented 3 years ago

Hi Alan,

Good point about the center pixel. I'll add some logic to handle odd-numbered sizes.

I don't have issues with custom code per se. I wouldn't have done what I've done without having a passion for it. But when your code is used by other people, you're now responsible for it. If I can offload some of that responsibility without compromising the goal of my code, then I consider it a win. The more I code, the more I look for ways to NOT write code.

Second, larger libraries like scipy, scikit-image and scikit-learn are used by orders of magnitude more people than my libraries and are authored by people with more experience than me. This is in line with Linus's law: while any library can have bugs, the bigger libraries are less likely to have them and more likely to fix them.

Lastly, it means I don't have to write unit tests, which pretty much trumps everything else. 🎉🎉