SimonRit opened 2 years ago
I dug around while testing the filter on Monte Carlo generated images. 1) Using the filter with parameters SPR 0.5 (as suggested by Park et al., http://dx.doi.org/10.1118/1.4923179, and consistent with my average(S)/average(P) ratio), AirThreshold 32000 and NonNegativity 10, I get consistent improvements when comparing reconstructions to the original CT: the MAE is roughly halved across all acquisitions. 2) For now, the only dubious part of the RTK implementation is https://github.com/SimonRit/RTK/blob/37b2cee717cf328bac83c1efb602f793430ab19b/include/rtkBoellaardScatterCorrectionImageFilter.hxx#L90, where the average of the value "behind the patient" is taken over the whole projection pixel count. I believe it should be averaged over the number of pixels that actually contribute to the sum (those > AirThreshold).
I'm still experimenting and will regenerate my dataset as soon as possible with the latter change to the filter. I will let you know how it goes.
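To make the suggested change concrete, here is a minimal standalone sketch of the two averaging options (plain C++ on a flat pixel buffer, not the actual ITK filter code; the `> AirThreshold` comparison direction simply follows the description above, so treat it as illustrative only):

```cpp
#include <cstddef>
#include <vector>

// Average the signal "behind the patient" in one projection.
// Current behavior: divide the sum by the total pixel count.
// Proposed behavior: divide only by the number of contributing
// pixels (those on the patient side of the air threshold).
double MeanBehindPatient(const std::vector<double> & proj,
                         double airThreshold,
                         bool divideByContributingOnly)
{
  double      sum = 0.;
  std::size_t contributing = 0;
  for (double v : proj)
  {
    if (v > airThreshold) // comparison direction as described above
    {
      sum += v;
      ++contributing;
    }
  }
  const std::size_t denom = divideByContributingOnly ? contributing : proj.size();
  return (denom == 0) ? 0. : sum / static_cast<double>(denom);
}
```

For a projection where, say, half the pixels see only air, the current averaging halves the estimate compared with the proposed one, which is exactly the phantom-size and geometry dependence discussed above.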
Ok, this seems to be slightly more complicated than I anticipated:
There are two papers from Boellaard in 1997 on the same topic, and both agree on the flatness of the scattered dose distribution on the image plane at large air gaps (>= 50 cm):
Nonetheless, it would be good to double-check on an Elekta database.
About the original report: Even though the user might have hit the nonnegativity constraint, I find it weird that no corrections were applied. In fact, each projection is corrected by (smallestValue - m_NonNegativityConstraintThreshold) in that case.
It's a bit hard to connect my knowledge of Marcel's implementation and Boellaard's Med Phys paper... But here is how I see it. In https://doi.org/10.1118/1.598066, equation (5) states that
$$S_{i,j}=\mathrm{EDSF}\ast\left[P_{i,j}\times\mathrm{NSPR}(T_{i,j})\right].$$
Now, because the air gap is large, we can assume that
$$\mathrm{EDSF}=1$$
and we obtain
$$S=\sum_{i,j}P_{i,j}\times\mathrm{NSPR}(T_{i,j}).$$
For $\mathrm{NSPR}$, it is less obvious but I think the idea is to assume that the amount of scatter is proportional to the signal behind the patient. They end up with the following formula
$$S=\frac{\sum_{i,j}\mathrm{m\_ScatterToPrimaryRatio}\times E_{i,j}}{\sum_{i,j}1}$$
where m_ScatterToPrimaryRatio is a constant parameter and $E_{i,j}=S_{i,j}+P_{i,j}$ if $S_{i,j}+P_{i,j}<$ m_AirThreshold, and $E_{i,j}=0$ otherwise.
After calculation, $S$ must be adjusted so that $\min_{i,j}\left(S_{i,j}+P_{i,j}-S\right)$ is not below m_NonNegativityConstraintThreshold.
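Putting those formulas together, here is a hedged standalone sketch of the computation (plain C++, no ITK; the parameter names mirror the filter members discussed here, the `< airThreshold` comparison follows the equations above, and the division by the total pixel count matches the current implementation being debated):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Estimate a constant scatter value S for one projection following the
// formulas above, then enforce the nonnegativity constraint by reducing S
// so that min(E - S) does not drop below the threshold.
double EstimateScatter(const std::vector<double> & proj, // E = S + P per pixel
                       double scatterToPrimaryRatio,     // m_ScatterToPrimaryRatio
                       double airThreshold,              // m_AirThreshold
                       double nonNegativityThreshold)    // m_NonNegativityConstraintThreshold
{
  // S = sum of m_ScatterToPrimaryRatio * E_{i,j} over pixels below the air
  // threshold, divided by the total pixel count (current implementation).
  double sum = 0.;
  for (double e : proj)
    if (e < airThreshold)
      sum += scatterToPrimaryRatio * e;
  double scatter = sum / static_cast<double>(proj.size());

  // Nonnegativity: min_{i,j}(E_{i,j} - S) must not fall below the threshold,
  // i.e. the shift by (smallestValue - threshold) mentioned earlier.
  const double smallest = *std::min_element(proj.begin(), proj.end());
  if (smallest - scatter < nonNegativityThreshold)
    scatter = smallest - nonNegativityThreshold;
  return scatter;
}
```

Switching the denominator to the count of pixels below the air threshold, as proposed above, is a one-line change in this sketch.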
I completely agree with you on the formulation. Averaging over the whole projection, rather than just over the thresholded pixel count, should account for the phantom size and the relative geometry of each view. I assume that this way m_ScatterToPrimaryRatio is independent of field size and patient size, acting as the NSPR.
My initial doubt was that the scatter contribution, now correctly determined on pixel values below AirThreshold, would result in really small corrections for the same m_ScatterToPrimaryRatio. But from the figure below I infer that increasing m_ScatterToPrimaryRatio (around 0.5 in my tests) would be consistent with the behavior in the attached figures (from the Boellaard Med Phys paper), which plot NSPR against SPR. I will check whether this is the case.
Why hasn't this change been merged to master?
Only because I haven't had the time to integrate the relevant parts of this discussion into the code...
The issue has been reported on the mailing list https://public.kitware.com/pipermail/rtk-users/2021-November/011108.html