Closed: keithoffer closed this issue 7 months ago
I'm starting to build up a suite of regression tests using pytest that I plan to push upstream in the near future. Would you be willing to share the images that are failing so I could add them?
Thanks for the heads up. I can reproduce this with the synthetic image generator. The test images are not publicly available since most test images now come from RadMachine customers and are thus customer data.
We changed the profile algorithm to better detect Halcyon images, whose fields go right up to the edge of the image. I'll dig in and get back to you.
Your idea of using the open field appears to be a 1-liner and works pretty well; thanks! Some of the test datasets do change slightly (~0.1-0.5%), which isn't ideal. Despite what it seems, we try not to change the results if we can help it 🫠; however, this approach is generally more stable across these test sets so far. It's also possible a Gaussian filter would help nudge the valleys above 50% for your specific case, although of course that will slightly adjust the ROI values as well. Are you guys using pylinac in a script or the RadMachine UI?
We still use a script. I'll have a play this week with some of the images that cause the issue, trying the RadMachine UI and filters, and see if that helps.
@crcrewso - attached is an example DRGS image set if you want to use it. Looks like there was some weird panel calibration thing going on, as the two halves of the profile don't meet up, but it causes the issue so it should be good enough. Or you can generate synthetic images like James did. DRGS_example.zip
Thank you so much. I'm adding it to our suite now.
A hotfix to pylinac and RadMachine will be published today.
Closed in v3.21.1
Describe the bug Running some previous data through PyLinac 3.21 as part of my monthly regression testing for upgrades, I noticed a change in some VMAT results. Looking at the analysis, it appears to be a resurfacing of an old bug (issue #391), caused by the switch back to FWHM for VMAT profile edge detection. That issue was the original reason for moving from FWHM to inflection-based field edge detection. Basically, if any of the valleys in the Ling profile dips below 50% of the maximum, it is detected as the edge of the field, and all ROIs are then placed in the incorrect location.
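The failure mode can be sketched numerically. This is an illustrative toy, not pylinac's actual implementation: it scans outward from the profile center for the first sample below 50% of max (the FWHM approach), and a Ling-pattern valley that dips below 50% gets mistaken for the field edge. All values are made up.

```python
import numpy as np

# Synthetic open-field-like profile with one DMLC valley below half-max.
profile = np.ones(100)
profile[:10] = profile[90:] = 0.05   # out-of-field tails (true edges at ~10 and ~89)
profile[40:45] = 0.40                # a Ling-test valley dipping below 50% of max

half_max = profile.max() / 2
center = profile.size // 2           # 50

# FWHM-style edge search: scan outward from center until the value drops
# below half-max. The left scan hits the sub-50% valley first.
left_edge = center - int(np.argmax(profile[center::-1] < half_max))
right_edge = center + int(np.argmax(profile[center:] < half_max))

print(left_edge, right_edge)  # 44 90 -> the left "edge" is really the valley, not index 10
```

With the valley below half-max, the detected field is roughly half its true width, so every ROI placed relative to those edges lands in the wrong spot.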
For a more precise call chain: VMATBase.analyze() calls VMATBase._calculate_segment_centers(), which calls VMATBase._median_profiles(), which is where the changed code causes the difference.
Some tests were added to catch the issue (see the issue for the exact commits), but I've never figured out how to download the test files, so I can't see why they aren't failing now that FWHM is the field edge detection method again. I can provide some DICOM images that showcase the problem if required.
To keep the current system and avoid the issue, maybe the regions of interest could be located using the profile from only the open image?
To Reproduce Run a DRGS or DRMLC analysis on an image set where any valley in the dynamic profile dips below 50% of the profile height. The field edges will be located incorrectly, which in turn means the ROIs will be offset.
Expected behavior Ideally the ROIs are located correctly so as not to cause incorrect analysis.