SimonBiggs / npgamma

Using numpy broadcasting to find the gamma index
GNU Affero General Public License v3.0

Local Gamma? #20

Open brenthuisman opened 5 years ago

brenthuisman commented 5 years ago

I can't find a way to compute local gamma. Do you have plans to implement this?

SimonBiggs commented 5 years ago

I might actually. There has been a little bit of work going into npgamma recently. I'll keep you posted.

SimonBiggs commented 5 years ago

When you say local gamma, is it okay to have it scaled by the local evaluation dose as opposed to the reference dose? The resulting difference in computation time is non-trivial.

brenthuisman commented 5 years ago

Great news! Hope you'll have time for this.

Regarding evaluation or reference: I suppose reference is what one expects. If the dose difference parameter (which is now a fraction or percentage rather than an absolute dose) is relative to the reference voxel_i, then one could swap the dose maps to avoid the speed penalty, if I understand you correctly.

Hmm, on second thought, the gamma analysis of course also changes when the reference and evaluation dose maps are swapped, so I think the dose difference should be relative to the reference, not the evaluation. What's roughly the difference in performance? Since it's probably only a subtle difference in the final gamma map, perhaps it could be a parameter?
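For illustration, the distinction under discussion can be sketched as follows. This is a hedged sketch, not npgamma's actual code: `dose_term`, its arguments, and the example values are all hypothetical.

```python
import numpy as np

def dose_term(dose_ref, dose_eval, dose_percent_threshold, local=False):
    """Squared dose-difference term of the gamma index (illustrative).

    Global gamma normalises the dose difference by a single scalar
    (here, the maximum reference dose); local gamma normalises by the
    reference dose at each point, so the criterion tightens in
    low-dose regions.
    """
    diff = dose_eval - dose_ref
    if local:
        norm = dose_ref * dose_percent_threshold / 100          # per voxel
    else:
        norm = np.max(dose_ref) * dose_percent_threshold / 100  # one scalar
    return (diff / norm) ** 2

# A 0.05 Gy difference in a low-dose voxel passes the 3% global
# criterion but fails the local one:
dose_ref = np.array([2.0, 1.0, 0.5])
dose_eval = np.array([2.0, 1.0, 0.55])
global_term = dose_term(dose_ref, dose_eval, 3)
local_term = dose_term(dose_ref, dose_eval, 3, local=True)
```

Which map supplies the per-point normalisation is exactly the reference-versus-evaluation choice being debated here.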


SimonBiggs commented 5 years ago

Yeah, I agree. It should be a parameter. I'll see if I can do that.

Also, it's not ready yet, but I am planning on having all future work on the gamma algorithm be done within the pymedphys project:

http://pymedphys.com

Once pymedphys.gamma has reached feature parity with npgamma I'll release npgamma 0.8.0 which will have a deprecation notice and mention that pymedphys should be used instead. Any implementation of local gamma will likely only be within pymedphys.gamma. I have also created two extra methods of determining gamma which can be used to compare results across different implementations if needed.

There is also some work going into improving the coordinate export from DICOM (https://gitlab.com/pymedphys/pymedphys/merge_requests/45/diffs), and making the API simpler for use with DICOM files:

from pymedphys.gamma import gamma_dcm
gamma_options = {
    'dcm_ref_filepath': "a_filepath",
    'dcm_eval_filepath': "a_filepath",
    'dose_percent_threshold': 3,
    'distance_mm_threshold': 3,
    'interp_fraction': 5,
    'max_gamma': 1.1
}
gamma_dcm(**gamma_options)

If you have any feedback on the new API, now is a great time to give it before it gets locked in.

Cheers, Simon

brenthuisman commented 5 years ago

That looks like a great package! Next week I'm going to have a closer look!

In terms of the API/function signature, I hope it will remain possible to supply numpy arrays. I work with a variety of image formats (unfortunately not often DICOM), and I have found it most productive to stay close to the numpy array as the voxel store, for its wide range of functionality. It looks like you might assume a DICOM-only world in the new library; is that correct?


SimonBiggs commented 5 years ago

The underlying calc_gamma is still usable. It has been renamed to gamma_shell (reflecting the method used). It can be called directly, and it takes numpy arrays.

I don't have any plans to change the API of that function (apart from potentially adding a local gamma flag). It should work just the same as calc_gamma from inside npgamma.
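For readers unfamiliar with what calc_gamma/gamma_shell computes from those numpy arrays, a brute-force 1-D version of the global gamma index might look like this. It is an illustrative sketch only; the real implementations search expanding shells around each point rather than scanning every evaluation point.

```python
import numpy as np

def gamma_1d(axis_ref, dose_ref, axis_eval, dose_eval,
             dose_percent_threshold=3, distance_mm_threshold=3):
    """Brute-force 1-D global gamma index, for illustration only."""
    norm = np.max(dose_ref) * dose_percent_threshold / 100
    gamma = np.empty(len(dose_ref))
    for i in range(len(dose_ref)):
        dose_term = ((dose_eval - dose_ref[i]) / norm) ** 2
        dist_term = ((axis_eval - axis_ref[i]) / distance_mm_threshold) ** 2
        # gamma is the minimum combined metric over all evaluation points
        gamma[i] = np.sqrt(np.min(dose_term + dist_term))
    return gamma

# Identical distributions give gamma == 0 everywhere:
axis = np.arange(10.0)
dose = np.exp(-((axis - 5.0) ** 2) / 4.0)
result = gamma_1d(axis, dose, axis, dose)
```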


SimonBiggs commented 5 years ago

@brenthuisman so I have just today implemented local gamma calculation within pymedphys.gamma. Would you be in a position to install the bleeding edge version of pymedphys using the following instructions:

https://pymedphys.com/en/latest/getting-started/installation.html#bleeding-edge-with-gitlab

Once you've installed it, the usage of the latest version of gamma_shell is described within: https://pymedphys.com/en/latest/user/gamma.html#available-functions

Could you take it for a spin, setting local_gamma to True? It has currently had next to no validation. If you are in a position to run your own validation, let me know and I can include some of your validation cases within the automated test suite.

Cheers, Simon
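A minimal sketch of what trying the new flag might look like, in the keyword-options style of the gamma_dcm example above. The parameter names here are assumptions based on this thread, not a verified gamma_shell signature; check the linked docs for the real one.

```python
# Hypothetical options for a local-gamma run; the names are assumptions
# drawn from this thread, not a verified pymedphys signature.
gamma_options = {
    'dose_percent_threshold': 3,    # with local gamma: per reference point
    'distance_mm_threshold': 3,
    'lower_percent_dose_cutoff': 20,
    'interp_fraction': 10,
    'max_gamma': 1.1,
    'local_gamma': True,            # the new flag under test
}
# gamma = gamma_shell(axes_ref, dose_ref, axes_eval, dose_eval, **gamma_options)
```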

brenthuisman commented 5 years ago

That's fast! I've tested the latest version (as per your instructions) a little bit.

  • First of all, I notice a serious performance regression (I also noticed that your build instructions forced an older version of Python (3.6); perhaps that is related?).
  • Secondly, the final outputs are quite different from those of the latest npgamma (0.7). For one set of images, npgamma gives a gamma mean/pass rate of 0.039/1.0, and pymedphys.gamma 0.092/1.0.
  • Thirdly, for reference, our tool gives 1.93/0.404. Is there a scaling issue here? I supply everything to pymedphys.gamma in units of mm, and the dose criterion in percent.
  • Local gamma I have not yet compared.

Let me know if I can help you further.

SimonBiggs commented 5 years ago

Are you able to provide me those images?

SimonBiggs commented 5 years ago

And can you keep an eye on your RAM usage and let me know if your RAM is maxing out?


SimonBiggs commented 5 years ago

Also, some of the defaults have changed between the versions. The lower dose cutoff in particular can have a significant impact on the calculated gamma.

In npgamma it is set to 0; in gamma_shell it defaults to 20%. Another change is that this lower dose cutoff now masks based on both the reference and the evaluation: if either dose grid, interpolated to the evaluation coordinate, is below the cutoff, that evaluation point is not used. Set the lower dose cutoff parameter to 0 to reproduce the npgamma defaults.
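The dual cutoff described here could be sketched like this. It is illustrative only: `cutoff_mask` is a hypothetical helper, and it assumes both doses are already interpolated onto a common grid (pymedphys does the interpolation itself).

```python
import numpy as np

def cutoff_mask(dose_ref, dose_eval, lower_percent_dose_cutoff=20):
    """Keep a point only if BOTH doses reach the cutoff, expressed as a
    percentage of the maximum reference dose (illustrative sketch)."""
    cutoff = np.max(dose_ref) * lower_percent_dose_cutoff / 100
    return (dose_ref >= cutoff) & (dose_eval >= cutoff)

dose_ref = np.array([0.1, 0.5, 1.0, 2.0])
dose_eval = np.array([0.1, 0.3, 1.1, 2.1])
mask = cutoff_mask(dose_ref, dose_eval)  # cutoff = 20% of 2.0 = 0.4
```

Note the second point: the reference dose (0.5) clears the cutoff, but the evaluation dose (0.3) does not, so the point is excluded, which is exactly the "either grid below the cutoff" behaviour and a way the gamma statistics can shift between versions even with identical criteria.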


brenthuisman commented 5 years ago

I made sure to set all parameters to the same values. So, since the npgamma package was the same version, I suppose the performance regression came from the downgrade of Python and perhaps numpy (I understand 3.7 brought some performance improvements). RAM usage is well within bounds (my machine has 48GB; I see no more than ~8GB used by Python processes during computation). Note that npgamma maxes out all 32 threads on my machine (even if I set the thread number to 8 or 16), while pymedphys.gamma doesn't saturate at all. Seeing as the thread number parameter is gone, I figure you use some heuristic to determine an optimal amount?

Sending you the images is probably going to be problematic (it's active clinical data). Performance aside, my main concern is how far off your code is from ours, which is used here in clinical QA. I suspect a normalization issue, as your average is close to 3% of our tool's, which is also the dose difference criterion I supplied. Maybe I am using your tool wrong, and I should supply an absolute dose (e.g. 3% of D_max)?

SimonBiggs commented 5 years ago

Is the performance loss similar to the loss in thread usage? I thought numpy was able to spread its calculations over the available threads, so I had actually removed that part of the code. But I only have 4 threads, so I wasn't able to see the loss of speed. Looks like it would be worthwhile putting that threading code back in.

Makes sense regarding the images. I'm very keen to get to the bottom of the differences. Would you be able to make a synthetic pair of image datasets that replicates the issue?

Cheers, Simon


brenthuisman commented 5 years ago

So the loss of performance came from the downgrade of Python and other packages (the npgamma version was kept the same) due to your build/install recipe, not from the change to pymedphys.gamma.

I notice the memory consumption is quite sensitive to the parameters: a more fine-grained comparison easily maxes out the 48GB, after which things slow down considerably.

Some other questions:

brenthuisman commented 5 years ago

I see the difference in scale of the final gamma map was my own doing: I was rescaling the dose difference parameter myself. I removed that (so I'm now passing in 3), but it has been running for over an hour now with still no output. Memory consumption doesn't go over 16GB though, so that's good. The other parameters are (after the doses and coords): 3, 3, 10, 10, 10, False, None.

Update: it took over 2 hours to finish, with generally low CPU occupancy. The result is now mean / pass rate / median / 99th percentile: 1.5388259 / 42.9476 / 1.1427877 / 5.5100021

Our tool: γ-mean=1.93, γ-1%=13.97, γ-pass=40.4

Our tool doesn't clip the dose at 10% (but in the interest of time, I did here), which may explain the difference.

Any tips for tuning?
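The summary statistics quoted above (mean, pass rate, median, 99th percentile) can be derived from a gamma map along these lines. This is a sketch, with the assumption that points excluded by the lower dose cutoff come back from the gamma calculation as NaN.

```python
import numpy as np

def gamma_stats(gamma_map):
    """Mean, pass rate (fraction with gamma <= 1), median, and 99th
    percentile over the valid (non-NaN) points of a gamma map."""
    valid = gamma_map[~np.isnan(gamma_map)]
    return (float(np.mean(valid)),
            float(np.mean(valid <= 1)),
            float(np.median(valid)),
            float(np.percentile(valid, 99)))

# One NaN (masked) point and one failing point out of three valid ones:
mean, passrate, median, p99 = gamma_stats(np.array([0.2, 0.8, 1.5, np.nan]))
```

Whether the masked points are dropped or counted is one more place where pass rates can diverge between implementations.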

SimonBiggs commented 5 years ago

One thing to speed things up is to set max_gamma to 1.1. If you're only testing whether or not gamma is less than 1, that means the search can stop once the value exceeds 1.1.

Something else that might help: if you re-pull master from GitLab, I have put another parameter in there, skip_when_passed. With that set, as soon as a value less than one is found, the search stops at that evaluation point. The precise gamma value won't be correct, but the < 1 statistic will still be the same.
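The two early exits just described can be sketched as follows. This is illustrative only: `min_gamma_with_early_stop` and the shell representation are hypothetical, not pymedphys' actual code.

```python
def min_gamma_with_early_stop(shells, distance_mm_threshold=3,
                              max_gamma=1.1, skip_when_passed=False):
    """shells: (distance_mm, best_gamma_in_shell) pairs in order of
    increasing search distance, as a shell-by-shell search produces."""
    best = float('inf')
    for distance_mm, shell_min in shells:
        # Beyond this distance the distance term alone already exceeds
        # max_gamma, so no further shell can improve a value we care about.
        if distance_mm / distance_mm_threshold > max_gamma:
            break
        best = min(best, shell_min)
        if skip_when_passed and best <= 1:
            break  # pass/fail for this point is already settled
    return best

shells = [(0, 2.0), (1, 1.5), (2, 0.9), (3, 0.5), (5, 0.1)]
g_full = min_gamma_with_early_stop(shells)                         # to ~max_gamma*dta
g_skip = min_gamma_with_early_stop(shells, skip_when_passed=True)  # first pass wins
```

With skip_when_passed the returned value overstates the true minimum, but both variants agree on whether the point passes.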


brenthuisman commented 5 years ago

Did a bit more tweaking; here is my report:

old (npgamma): mean / pass rate / median / 99th percentile = 1.7102355 / 36.9392372 / 1.2136544 / 7.1313693; settings: dta, self.max()*dd/100., 10, dta / 3, dta*2, np.inf, 16; TotalSeconds: 22.09

new (gamma_shell): 1.6233622 / 36.9392372 / 1.2136544 / 5.5191085; settings: dd, dta, 10, dta, 10, False, None, True; TotalSeconds: 173.29

local gamma: 1.6734046 / 34.9579832 / 1.3333355 / 5.6780657

ours: γ-mean=1.93, γ-1%=13.97, γ-pass=40.4

Pass rate and median are identical, so that's good news. Switching on skip_when_passed predictably takes a bit off the top of the distribution. Still, with what I think are very similar settings, there is a slowdown (though no longer 2 hours!). I think it may be searching much further (at least up to 20mm, while with npgamma I restrict it to dta*3 (=6mm)).

Local gamma produces larger high-value gamma indices, so when using local gamma I think skip_when_passed must be forced off. However, the results are nearly identical to global gamma with skip_when_passed set either way.

SimonBiggs commented 5 years ago

The gamma search distance is now set using "max_gamma". Setting that parameter to 3 has the same effect as previously writing 3*dta into the max search distance parameter. If you're just looking at pass rates initially, then set max_gamma to 1.1.

Let me know if that makes a difference.

Also, could you plot the doses slice by slice, and then also plot the gamma, and take a look?

Cheers, Simon


SimonBiggs commented 5 years ago

@brenthuisman I wanted to alert you to an embarrassing mistake on my part. For as long as npgamma has been in existence I have held the misconception described in the following issue:

https://gitlab.com/pymedphys/pymedphys/issues/4

It essentially means that evaluation and reference are swapped. For local gamma, it also means the wrong grid is being used for normalisation. I'll sort it out within pymedphys in the next couple of days.