oslocyclotronlab / OCL_response_functions

response matrices used at ocl, mostly oscar and cactus

How to use new response functions for OSCAR with the response class in OMpy #1

Closed · valamaria89 closed this issue 4 years ago

valamaria89 commented 4 years ago

Hi,

I am struggling to understand how to use the new response function for OSCAR with the response class in OMpy.

response = om.Response(path="../ompy/OCL_response_functions/oscar2020/response_squarecut_50keV_10.000keV_efficiency.m")

Since it is a mama file I get the error `File is not a zip file`. I could use the Matrix class instead, but then I cannot use the interpolation.

Am I missing something?

fzeiser commented 4 years ago

The key idea is that you should not need to do any response function interpolation any longer, but can just load the response and send it to the unfolder (potentially after rebinning). You may of course still interpolate, but then you will need a different format, which can be created from the results; see response_results.tar.gz.
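For reference, a minimal sketch of what "just load the response" can look like, using only the calls that appear later in this thread (om.Matrix reads the mama file directly; om.Response apparently expects a zipped format, hence the "not a zip file" error above):

```python
import ompy as om

# Load the simulated OSCAR 2020 response as a plain matrix; no interpolation needed.
response = om.Matrix(path="../ompy/OCL_response_functions/oscar2020/"
                          "response_squarecut_50keV_10.000keV_efficiency.m")
print(response.values.shape)  # (995, 996): incident ("Ex") x detected ("Eg") energies
```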

fzeiser commented 4 years ago

So, why do you need to interpolate? Do you want to use a finer grid than the 10 keV bins you currently have?

fzeiser commented 4 years ago

I've also added the files needed for the interpolation now (even though I would generally discourage it). I haven't tested them, but it should work once you unpack them:

See #2 and https://github.com/oslocyclotronlab/OCL_response_functions/blob/dev/add_mama_interpolation/oscar2020/mama_export.tar.gz

valamaria89 commented 4 years ago

Thank you for the quick answer!

I may have stated my question badly. So, when I import the new response matrix with

response = om.Matrix(path="../ompy/OCL_response_functions/oscar2020/response_squarecut_50keV_10.000keV_efficiency.m")

it has the shape

```python
resp = response.values
print(resp.shape)
# (995, 996)
```

To use it with my raw matrix I need it to match my E_gamma axis: len(Eg) = 354, so the response matrix needs to have shape (354, 354) to be able to unfold in this case.

I could rebin the matrix, but since it has shape (995, 996), i.e. it is an (n x m) matrix and not an (n x n) matrix, this makes it hard to rebin, in my head at least. I may be overthinking this and there is an easy solution, but I cannot see it. =)

fzeiser commented 4 years ago

Could you please elaborate on the problem with rebinning? I don't see it. You can just use Matrix.rebin on either/both axes. Just provide the mids argument corresponding to your matrix (raw.Eg):

https://github.com/oslocyclotronlab/ompy/blob/71ddb33f806f169c3a4cbcdcdee44698d6829893/ompy/matrix.py#L657-L666
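A usage sketch, assuming `response` is the om.Matrix loaded from the .m file and `raw` is your raw om.Matrix (the variable names are placeholders):

```python
# Rebin the response onto the Eg grid of the raw matrix; `mids` are the target bin centers.
response.rebin(axis='Eg', mids=raw.Eg)  # detected-energy axis
response.rebin(axis='Ex', mids=raw.Eg)  # incident-energy axis (see the efficiency caveat below)
```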

fzeiser commented 4 years ago

You might have to think about the efficiencies when rebinning. If you, e.g., rebin to a new axis with only half the number of bins on the incident energy (but the same range), the efficiency would approximately double for each resulting bin. This is not what you want, so you'd have to multiply the values by the ratio of old to new bin widths.
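A sketch of that correction, assuming the rebinned "Ex" axis is uniform and the original simulation grid is the 10 keV one discussed above:

```python
# Rebinning "Ex" onto a coarser grid sums counts into wider bins, so the efficiency per
# incident-energy bin grows by the ratio of bin widths. Scale the values to undo this:
binwidth_old = 10.0                              # original simulated grid, keV
binwidth_new = response.Ex[1] - response.Ex[0]   # bin width after rebinning
response.values *= binwidth_old / binwidth_new   # ratio of old to new bin widths
```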

fzeiser commented 4 years ago

Btw: if you look at the values, not just the shape, you'll see that the difference between the axes is just the 10 MeV bin on the detected-energy axis (the Eg axis).

fzeiser commented 4 years ago

One more thought on this. It might be easier to find the closest simulated incident energy in the response (the "Ex" axis) instead of rebinning "Ex", and then rebin Eg to the Eg axis of your raw matrix. This way you don't need to care about renormalizing the bin contents.

valamaria89 commented 4 years ago

I think I don't understand what you mean by: closest simulated incident energy in the response? I first thought you meant that I could do this:

```python
response = om.Matrix(path=".../ompy/OCL_response_functions/oscar2020/response_squarecut_50keV_10.000keV_efficiency.m")
response.cut('Ex', 0, 8500)  # cut the response matrix so it matches the energies in the raw matrix
response.cut('Eg', 0, 8500)  # cut the response matrix so it matches the energies in the raw matrix
response.rebin(axis='Eg', mids=mat.Eg)  # rebin Eg to the Eg axis of the raw matrix
```

But when I take the dot product of this response matrix with the raw matrix, the initial unfolded spectrum ends up with the same shape as the Ex axis, so I am misunderstanding something here.

So I tried the other suggestion (as I understood it):

```python
response = om.Matrix(path="../ompy/OCL_response_functions/oscar2020/response_squarecut_50keV_10.000keV_efficiency.m")
response.cut('Ex', 0, 8500)  # cut the response matrix so it matches the energies in the raw matrix
response.cut('Eg', 0, 8500)  # cut the response matrix so it matches the energies in the raw matrix
response.rebin(axis='Eg', mids=mat.Eg)  # rebin Eg to the Eg axis of the raw matrix
response.rebin(axis='Ex', mids=mat.Ex)  # rebin Ex to the Ex axis of the raw matrix

ratio = 86 / 796  # my raw matrix has shape (86, 86) and my response (after cutting) has the shape (796), so this is the ratio between the two

resp = response.values * ratio  # multiply the values of the response matrix by the ratio to correct the efficiency when rebinning
```

But this is not right either... So I am sorry, but I don't understand how to do this correctly. :P

fzeiser commented 4 years ago

Please recall the convention of the response matrix here: the "Ex" axis holds the incident gammas, and the "Eg" axis the detected gammas. So in the standard approach we use, you should make sure that the "Ex" axis of the response matrix has the same binning as "Eg" in the raw spectrum (and potentially that "Ex" = "Eg" in the response).

what you mean by: closest simulated incident energy in the response?

Assume that you have a raw matrix with three Eg bins, Eg = 500, 1000, 1444 keV. The response matrix R, as it is set up now, has the binning Ex = Eg = 50, 60, 70, ..., 10000 keV (10 keV steps). So for the unfolding, instead of rebinning the response matrix, you might just want to create a new response R' by picking the rows of R closest to 500, 1000 and 1440 keV. Once you have created R' by selecting the closest simulated incident energies, you can rebin the Eg axis if you want a square matrix.
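As a toy illustration of the row picking (plain numpy, not OMpy; the energies are taken from the example above):

```python
import numpy as np

Ex_sim = np.arange(50, 10001, 10)      # simulated incident energies, 10 keV grid (keV)
Eg_raw = np.array([500, 1000, 1444])   # Eg bins of the raw matrix in this example (keV)

# For each raw Eg bin, pick the row of R whose simulated incident energy is closest.
rows = np.argmin(np.abs(Ex_sim[None, :] - Eg_raw[:, None]), axis=1)
print(Ex_sim[rows])                    # -> [ 500 1000 1440]
```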

valamaria89 commented 4 years ago

Thank you for your patience! I think I understand everything now. I will post my code so you and others can see it, and if anything is wrong don't hesitate to correct me =)

```python
# Response matrix
response = om.Matrix(path="/home/vala/Documents/Master/MachineLearning/ompy/OCL_response_functions/oscar2020/response_squarecut_50keV_10.000keV_efficiency.m")
response.rebin(axis='Eg', mids=mat.Eg)  # rebin the Eg axis

Ex_indices = response.indices_Ex(E=Eg)  # indices of the closest simulated incident energies (from the Matrix class)

# Make a new R' matrix from the rows of the old response closest to the Eg bins of the raw matrix:
Ex_resp = np.zeros(len(Ex_indices))
resp = np.zeros((len(Ex_indices), len(mat.Eg)))
for i in range(len(Ex_indices)):
    Ex_resp[i] = response.Ex[Ex_indices[i]]
    resp[i] = response.values[Ex_indices[i]]
print(resp.shape)
# (86, 86)
```
cecgresonant commented 4 years ago

Hi, sorry for joining the discussion so late.

what you mean by: closest simulated incident energy in the response?

Assume that you have a raw matrix with three Eg bins, Eg = 500, 1000, 1444 keV. The response matrix R, as it is set up now, has the binning Ex = Eg = 50, 60, 70, ..., 10000 keV (10 keV steps). So for the unfolding, instead of rebinning the response matrix, you might just want to create a new response R' by picking the rows of R closest to 500, 1000 and 1440 keV. Once you have created R' by selecting the closest simulated incident energies, you can rebin the Eg axis if you want a square matrix.

I am skeptical of the approach of grabbing the closest simulated energy; wouldn't this potentially lead to unnecessary errors? Although there is of course always some uncertainty in the interpolation between the responses, my hunch is that it is at least more precise than just taking the closest one. For NaI detectors, taking the closest one might be sufficient, but I doubt this is good enough for OSCAR (or other high-resolution detectors).

fzeiser commented 4 years ago

Well, I'm not saying that it is my preferred approach; I recommend rebinning to the same binning as the response matrix.

But if you are for some reason not willing to do this, how far do you expect to be off when you're picking the closest energy with a bin width of 10 keV (that's the current grid for the simulated spectra)? I don't think that the error will be so large; at worst it would result in a 5 keV shift of your spectra.

cecgresonant commented 4 years ago

Maybe I just misunderstood, but if you are to unfold low-energy gammas, such as for the Eu source, then I would think one has to do an interpolation. The Compton and photopeak contributions change quite rapidly below Eg = 1 MeV. Has the interpolation option in OMpy been removed?

fzeiser commented 4 years ago

No, it's not removed:

I've also added the files needed for the interpolation now (even though I would generally discourage it)

I still don't understand why one would have to do one thing or the other. Let's continue with a realistic example: whether you should interpolate or not depends on the binning of your gamma rays. What binning would you assume you have for Eg in the matrix you want to unfold?

cecgresonant commented 4 years ago

It depends on the case of interest. If you have a high-resolution measurement (for example, with Ge Clover detectors), you would typically have 0.5 keV/channel for your gamma-ray spectrum. An example of a real case is the rotational band of 254No that Frank has worked with.

fzeiser commented 4 years ago

Ok, but we are talking here about the new response for OSCAR, where I don't have Clovers but LaBr detectors. My comment does not apply to other response functions (also not to the OSCAR 2017 response modeled on a 200 keV grid).

fzeiser commented 4 years ago

(Off topic: For Frank's case, one may want to simulate on a very fine grid then, e.g. a 0.5 keV grid. If the response changes too fast, that would be more reliable than the interpolation. If the response changes slowly, one can of course use a coarser grid and interpolate. Remember to check the low-energy interpolation though. There were some changes between OMpy and mama. I seem to recall that OMpy does better for Eg < E_backscatter, but I'm not sure.)

cecgresonant commented 4 years ago

Ok, but we are talking here about the new response for OSCAR, where I don't have Clovers but LaBr detectors. My comment does not apply to other response functions (also not to the OSCAR 2017 response modeled on a 200 keV grid).

One could think of cases where one would like to use 1 keV/channel also for OSCAR, for heavy nuclei for example. My main worry was that the interpolation option was gone, but as long as it is still included, everything is fine :)

fzeiser commented 4 years ago

Note that once you get to 1 keV/bin resolution, you will probably also have to care more about the statistical fluctuations in the response matrix. The response is determined from a certain number of simulated events, and that number might not have been high enough to both work at 1 keV/bin resolution and still claim that the uncertainty per bin in the response is negligible (which we currently do; we just use one response...).

Now one could be tempted to simulate a few incident Eg's, say every 100 keV, with a very high number of events and use the interpolation. But then one should check that the interpolation works well on a keV level, which I doubt. The background is not just Compton, but also depends on the geometry/materials. I'm very uncertain whether the interpolation will capture this correctly (although, of course, it might).

valamaria89 commented 4 years ago

Well, I'm not saying that it is my preferred approach; I recommend rebinning to the same binning as the response matrix.

But if you are for some reason not willing to do this, how far do you expect to be off when you're picking the closest energy with a bin width of 10 keV (that's the current grid for the simulated spectra)? I don't think that the error will be so large; at worst it would result in a 5 keV shift of your spectra.

Fabio: Do you think this is a problem when using FBU? If I have a binning of 10 keV, the problem becomes a 996-dimensional problem and the sampling gets very slow (my computer crashes when I use more than 500 dimensions, but maybe this is not the case on a better-suited computer, of course(?)). In my case the binning of my raw matrix coincides nicely with the closest incident energies of the Eg_true axis, so I do not see any shifts in my spectra. But it might become a problem in other cases? So I think that when using FBU one needs to be careful if one wishes to rebin the raw matrix? The best solution is of course to sample a 996-dimensional space, but this will give more fluctuations, and to reduce these FBU recommends fewer bins. My rule of thumb has been to rebin so that I have at least 4 bins covering the full-energy peak.

fzeiser commented 4 years ago

Do you think this is a problem when using FBU?

The problem I had in mind does not depend on which unfolding method you choose, but on how well you determined your response matrix. As you said, your bins coincided with the already simulated response matrix, so what exactly do you mean?

But it might become a problem in other cases? So I think that when using FBU one needs to be careful if one wishes to rebin the raw matrix?

This is not FBU specific, is it?

The best solution is of course to sample a 996-dimensional space, but this will give more fluctuations, and to reduce these FBU recommends fewer bins. My rule of thumb has been to rebin so that I have at least 4 bins covering the full-energy peak.

Fluctuations are not a problem for me, that's how life is? If you have less statistics per bin, you get larger fluctuations. The advantage of FBU might be that it gives you the correlations as well. But if you want to have fewer correlations (because e.g. the first generation method does consider them), it might make sense to have larger bins. It is quite common to use 2 bins per FWHM in optics, I think; but for the unfolding itself this is just a matter of the choice of presentation. But I don't think that this is relevant to this thread here.

valamaria89 commented 4 years ago

The problem I had in mind does not depend on which unfolding method you choose, but on how well you determined your response matrix. As you said, your bins coincided with the already simulated response matrix, so what exactly do you mean?

When using the iterative method I do not see a problem with rebinning your raw matrix so that it fits the response matrix. In most cases (except maybe the cases mentioned in this thread) one can do as you say in the comment above:

Well, I'm not saying that it is my preferred approach; I recommend rebinning to the same binning as the response matrix.

So I would argue that it is indeed more specific to FBU, where one will almost always rebin the raw matrix, as opposed to the iterative method, where one can (in most cases) rebin the raw matrix according to the response. If I haven't misunderstood the discussion.

Fluctuations are not a problem for me, that's how life is?

I do not understand this question.

If you have less statistics per bin, you get larger fluctuations. The advantage of FBU might be that it gives you the correlations as well. But if you want to have fewer correlations (because e.g. the first generation method does consider them), it might make sense to have larger bins. It is quite common to use 2 bins per FWHM in optics, I think; but for the unfolding itself this is just a matter of the choice of presentation. But I don't think that this is relevant to this thread here.

True. It was additional information for others reading the thread on why one usually rebins the raw matrix when using FBU.

My question: if rebinning the raw matrix and picking the closest incident Eg_true energies will, in the worst case, result in a 5 keV shift, is the only option to go back to the old method with interpolation? Or is there a solution to this that is 'better' than using interpolation?