GRIFFINCollaboration / GRSISort

A lean, mean, sorting machine.
MIT License

Angular correlation class #547

Closed SmithJK closed 8 years ago

SmithJK commented 9 years ago

I'm going to spend a week, most likely after the EEC deadline, writing an angular correlations class. Some of the things I hope to incorporate:

  • event mixing technique
  • storage of prompt and event-mixed histograms as well as divided histograms
  • background subtraction (time-random and energy-dependent)
  • angular index map
  • conversion of histogram to graph
  • Legendre plotting

This will mainly be based on GRIFFIN, since that's what we've been working with so far, but I will try to keep it as general as possible.

Leave any suggestions, either for structure or features, below.

r3dunlop commented 9 years ago

How do you plan on putting the event creation in? Will this be a function that you call using a script?

Also we were able to speed that stuff up, so I might just talk to you about this in Santa Fe.


AndrewMac1 commented 9 years ago

We could also think about putting crystal pair efficiency corrections in here. Although they may be small (~1%), it wouldn't hurt for the sake of completeness.

SmithJK commented 9 years ago

@r3dunlop I think at this point, I may leave the actual creation of the event-mixed histograms to the user. I will probably assume that the user has created a certain number of prompt and event-mixed histograms.

@AndrewMac1, I don't know what you mean by crystal pair efficiency corrections.

AndrewMac1 commented 9 years ago

When you look at the efficiency of detecting a gamma in a crystal, not all crystals are equally efficient; when detecting both gammas at each angle there is a small correction (~1-1.5%). This isn't a huge correction, but if for some reason we lose efficiency in some crystals/detectors, it may cause an issue or skew in the correlation. I don't know if this is easy to add, but all it would require is a relative efficiency for each crystal; then make a 64x64 grid that can be sorted into angle pairs and summed. These corrections would then be included when correcting for the weighting factors as well.
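To illustrate the bookkeeping (a minimal sketch only, not GRSISort code; the relative-efficiency array and the crystal-pair-to-angular-index map are assumed inputs):

#include <vector>

// Sum relative pair efficiencies into angular-index bins.
std::vector<double> SumPairEfficiencies(const double relEff[64],    // relative efficiency per crystal
                                        const int indexMap[64][64], // angular index per crystal pair, -1 if unused
                                        int nIndices)
{
   std::vector<double> pairEff(nIndices, 0.);
   for(int i = 0; i < 64; ++i) {
      for(int j = 0; j < 64; ++j) {
         if(i == j) continue;                  // no self-coincidences
         int idx = indexMap[i][j];
         if(idx < 0) continue;
         pairEff[idx] += relEff[i]*relEff[j];  // pair efficiency ~ product of singles efficiencies
      }
   }
   return pairEff;                             // to be folded into the weighting factors
}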


SmithJK commented 9 years ago

How is this different than what is included in the event-mixing technique?

AndrewMac1 commented 9 years ago

Yeah, true, you're right, it should be encompassed in there. No need for this if event mixing is done externally somewhere.


r3dunlop commented 9 years ago

So is this guy just going to be a fitting class, similar to my TDecay? Because if it is, we should talk. I think I understand the best way to do these things now, and the more of a plan you have for how the class should behave before going in, the better it turns out. This is especially true for the fitters. TDecay was one I had thought about quite a bit before implementing, whereas TPeak (and friends) could have used a little more thought.


SmithJK commented 9 years ago

I want this class to have the functions to create an angular correlation plot from those histograms. I'm thinking that an instance of the class would (at minimum) store the prompt spectrum, an event-mixed spectrum, and a weighting factors spectrum. It should have a function to create the divided spectrum. It should have a function to convert these angular index histograms to TGraphAsymmErrors graphs as a function of cosine. It should have a function to fold the angular correlations and to specify grouping. It should have Legendre polynomial fitting - at least purely theoretical, and perhaps convolved with the simulation, if we can get that to work.

In short, I want to create a class that will start to standardize the techniques and code that we are using, so that we all work together to get bugs out, and to facilitate future angular correlation measurements.

@AndrewMac1 and I are both starting with 52 prompt gamma-gamma histograms and 52 background (or event-mixed) gamma-gamma histograms. I don't care so much about storing all of these histograms in an instance of the class, but it might be practical to make a separate class for those histograms (derived from TH2) just so we know how to interface with them. This is something I'm not sure about yet.
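For the histogram-to-graph step, here's a minimal sketch of what I have in mind, assuming a user-supplied angular-index-to-angle map in degrees (the names are placeholders, and the asymmetric errors would eventually come from the fits):

#include <vector>
#include "TH1.h"
#include "TGraphAsymmErrors.h"
#include "TMath.h"

// Convert a histogram of counts vs. angular index into a graph vs. cos(theta).
TGraphAsymmErrors* IndexHistToCosGraph(TH1* hst, const std::vector<double>& angles)
{
   auto* graph = new TGraphAsymmErrors(hst->GetNbinsX());
   for(int bin = 1; bin <= hst->GetNbinsX(); ++bin) {
      double cosTheta = TMath::Cos(angles[bin-1]*TMath::DegToRad()); // bin i <-> angular index i-1
      graph->SetPoint(bin-1, cosTheta, hst->GetBinContent(bin));
      // symmetric errors for now; asymmetric ones would come from the peak fits
      graph->SetPointError(bin-1, 0., 0., hst->GetBinError(bin), hst->GetBinError(bin));
   }
   return graph;
}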

VinzenzBildstein commented 9 years ago

I think this is a good idea, and maybe it's not a bad idea to include a theoretical correction for the individual crystal efficiencies (as Andrew suggested). That way one could use that correction to correct the event-mixed/background spectrum and see whether the result is truly flat. This could be an important test to see that nothing else is wrong.

r3dunlop commented 9 years ago

Ok. This sounds like more of a namespace than a class?

i.e. TAngularCorrelation::Group(hist1); That is, unless there are going to be actual data members stored and it makes sense to create an “instance”. You could of course do something similar to what you said and make it a class that you can wrap all of the TH1’s into to make it easier. Deriving from TH2 can be a bit of a monster, as you would actually have to derive from TH2F, and TH2D, and TH2I and…. or standardize a specific data type to use for angular correlations. This is because TH2 is abstract, and doesn’t hold any data itself. Choosing a specific type also doesn’t work well as certain functions always return F and certain ones always return D.

The question I suppose is “what is an angular correlation?”. My answer would be: “a graph of data-points at various angles which shows a pattern in those angles.” Maybe an angular correlation inherits from TGraphErrors instead, and you use your histograms to build that graph.

One other comment: I’m not doing angular correlations so perhaps I shouldn’t have a say in this, but if this is happening, I would personally like to see the death of the angular indices. The reason is, with the discovery of THnSparse, and the fact that it seems to run much better on most of our matrices than a standard THn, we should consider moving these analyses over to the THnSparse. You could then bin in your choice of cos(theta) or theta without using extra memory. This has the advantage that you may not have to consider regroupings when moving the array in and out of high-efficiency mode. You could group based on proximity of data points in “real space”.
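For example, something like this (a sketch of the sort of booking I mean; the binning is purely illustrative, not a GRSISort convention):

#include "THnSparse.h"

// Book a sparse gamma-gamma matrix binned directly in cos(theta);
// fine binning costs nothing while the bins are empty.
THnSparseF* BookCorrelationSparse()
{
   Int_t    bins[3] = {2000, 4096, 4096};
   Double_t xmin[3] = {-1., 0., 0.};
   Double_t xmax[3] = { 1., 4096., 4096.};
   return new THnSparseF("corr", "#gamma-#gamma vs cos(#theta);cos(#theta);E_{1};E_{2}",
                         3, bins, xmin, xmax);
}

// Fill inside the event loop; only occupied bins allocate memory.
void FillCorrelation(THnSparseF* corr, double cosTheta, double e1, double e2)
{
   Double_t x[3] = {cosTheta, e1, e2};
   corr->Fill(x);
}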


SmithJK commented 9 years ago

Okay, I can see the usefulness of having a space for those crystal-pair efficiency corrections. While I hope that after doing this a few times we don't need to run this check, if we build the infrastructure, it would provide a good check. In addition, it's possible that in some cases, the event-mixing will significantly increase the error in the bins. In that case, you might be better off doing an efficiency correction to the weighting histogram and using that to divide. This would be the more traditional way of correcting the angular correlations.

"you may not have to consider regroupings when moving the array in and out of high-efficiency mode": I may be misunderstanding what you're saying, I would expect the geometric symmetries to be constant regardless of distance of detector from implantation point.

I think the advantage of the angular indices is precisely the ability to set the angles later in the analysis process. Andrew has shown that the most likely point for energy deposition changes as a function of energy. This would mean the theta value for a pair changes as a function of energy. Angular indices allow us to use the same set of histograms for all energies and set the theta value after the energies for the cascade have been chosen.

I am very open to using THnSparse in this. Even if we continue using angular indices, it would be helpful to be able to store two three-dimensional arrays rather than 52+52 two-dimensional arrays.

AndrewMac1 commented 9 years ago

If we move the detectors back, then the angles will change as well. I have looked at this, albeit a long time ago, and they don't all change in the same direction (meaning all get smaller or larger). I don't know how much it will necessarily change the grouping (probably not at all), but it could, which is what Ryan was saying. Since we have always been running in high-efficiency mode we have never really needed to do otherwise, but it could be an issue if we run something with a lot of beam and decide we want to pull back the clovers.


r3dunlop commented 9 years ago

"I would expect the geometric symmetries to be constant regardless of distance of detector from implantation point.” Has anyone shown this yet? When you pull the detectors back into suppressed mode, some angles are going to grow faster than others as the crystals do not truly move along r (the clovers do). For instance, the crystal-to-crystal within a clover distance will never change, but the crystal to nearest-neighbour-clover crystals will change. This could in principle change your groupings.

I can see what you are saying about angular indices though. Those should normally not change for GRIFFIN. However, what happens when someone decides to not fully close the array, or to pull back a subset of detectors to measure summing? My main point is that if we want to use indices, we are hiding information. And if we want to hide information, we need a good way to recover that information. This can be done. I think we should just be careful with how we think about the indices.


r3dunlop commented 9 years ago

One can also in principle apply a correction to the original angles if you want to apply energy corrections. Furthermore, if we want to consider energy as a factor, we should put something into the sort code. There is already room for an interaction point, and this would make the sparse histograms even more powerful, as every energy can be plotted at its "true interaction" position. This means you do not have to apply corrections after the fact, and everything is already at the correct position. One can then group the energy-gated sparse histograms.

To me personally, dealing with everything in terms of angles just seems like it would make it easier as I would not have to worry about some potential mapping from indices to angles that we would have to generate (based on energy) and also store somewhere.

SmithJK commented 9 years ago

If the detectors are all at the same distance, the grouping of pairs of crystals that have the same opening angle should not change. However, you are correct in that the value of some of these will change, while others will not. We would need a separate crystal index -> angle map for these cases.

The point about cases in which all detectors are not at the same distance though is a good one. In these cases, you would need more than 52 indices. Actually, if we implement the energy-dependent deposition point, we would need 126 anyway (the angle between energy A deposited in crystal 1 and energy B deposited in crystal 2 is not the same as the angle between energy B deposited in crystal 1 and energy A deposited in crystal 2).

I like the idea of doing the angles by just making the "GetPosition" function energy dependent. I like it a lot. Would doing the analysis this way lose us the ability to divide simply by the weighting factors? Or to use the crystal-pair efficiency correction? I'm not sure.

r3dunlop commented 9 years ago

I don’t think you would lose the weighting at all, because you can just generate them using a function which builds the numbers for you. Since one energy should generate 64 vectors, you should be able to say TAngularCorrelation::GenerateWeights(energy1, energy2, *indices, *weights) or something, and generate your indices from the real angles and put them into arrays. It’s much harder to go in the other direction.
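Something along these lines (a sketch only; the positions stand in for whatever GetPosition ends up providing, and rounding to 0.1 degrees to group equal opening angles is an arbitrary choice):

#include <map>
#include "TVector3.h"
#include "TMath.h"

// Count how many crystal pairs share each opening angle; these counts
// are the weights for the corresponding angular bins.
std::map<double,int> GenerateWeights(const TVector3* positions, int nCrystals)
{
   std::map<double,int> weights; // opening angle (degrees) -> number of pairs
   for(int i = 0; i < nCrystals; ++i) {
      for(int j = 0; j < nCrystals; ++j) {
         if(i == j) continue;
         double angle = positions[i].Angle(positions[j])*TMath::RadToDeg();
         angle = TMath::Floor(angle*10. + 0.5)/10.; // round to 0.1 degrees
         ++weights[angle];
      }
   }
   return weights;
}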


pcbend commented 9 years ago

Making GetPosition a function of energy is nearly equivalent to adding a random number generator to the position. Without a real pulse-shape analysis the answer will be wrong, and off from the true point in the detector by a random amount. Using a fixed point in the detector, we know the answer is not correct, but we can control the systematic error in the position result by assuming a fixed point in the detector. I believe going away from this is quite dangerous from an error-analysis point of view, doubly so if you want to use any addback method.


r3dunlop commented 9 years ago

It’s true. Especially if we are generating templates in Geant4 as a function of energy, it shouldn’t matter what we call “the interaction point”.


SmithJK commented 9 years ago

I don't think you lose the weighting if you specify a particular peak energy, but I thought we were discussing a continuous GetPosition(double energy) function used in the sort code. In that case the gate has a range of energies (and thus angles) in it. If we used the angular indices and then corrected them for the peak energy manually (or at least with an angular correlation function instead of within the sort code), we wouldn't lose the weighting.

@pcbend, could you explain a bit more how assigning the position as a function of energy is the same as adding a random number generator? @AndrewMac1 has run some simulations indicating that the most likely interaction point within the crystal changes systematically as a function of energy. So then, if we assign the interaction point to that for a 1 MeV gamma (I think this is roughly how it's assigned now), then the error on the assigned interaction point for a 4 MeV gamma would have a systematic shift, rather than being randomly (and, I would hope, rather narrowly) distributed around zero. In this case, we would have a result that is both inaccurate and imprecise, rather than just imprecise.

@r3dunlop is right - as long as we're comparing directly to a simulation that assigns the same interaction point for the crystal, we should still get the right answer.

pcbend commented 9 years ago

Comparing to a simulation could in theory make it better, but it will still always be the wrong answer for the position. What makes it worse is that the wrong answer one gets by assuming one knows where the interaction takes place now carries an essentially unknown amount of 'wrong-ness', whereas if one uses the crystal face, you can weight your distribution with the energies of the gammas you are measuring. The big difference here is that one approach matches a distribution while the other tries to correct for it event by event. The only successful way to determine where in the detector the interaction happened is through analysis of the pulse shape of the signal. There are many factors that determine the actual depth/location (or apparent depth/location) of an interaction; these include not only the energy but also things like the number of scatters, the polarity, and any cross-talk between the crystals/electronics, to name a few.

At TRIUMF you are in a unique position to test how well you can determine the position within a GRIFFIN crystal, if you choose to. There is data on disk (from the TIP runs, for example) that uses the TIGRESS array with both TIGRESS and GRIFFIN clovers installed. A value like the Doppler-reconstructed energy of a gamma is pretty sensitive to things like position. For both a GRIFFIN crystal and a TIGRESS crystal you could try to Doppler correct the gamma-ray spectrum, using the segment position for TIGRESS and a GetPosition() that depends on energy for GRIFFIN. I am more than certain TIGRESS will win here; the reason is that the position in TIGRESS is definite down to the face of the segment, while the position in GRIFFIN is uncertain across the whole crystal. In general, non-segmented detectors and arrays are far simpler than their segmented counterparts; if we as a community could reliably predict where in the crystal an interaction occurs, the usefulness of the segmented arrays would be limited to a very small subset of experiments.

The traditional way to handle this uncertainty in the interaction point when analyzing an angular distribution/correlation is to include a so-called Q-value in the calculation of the Legendre parameters. I would highly recommend going down this well-trodden path before you need to convince someone that you have actually determined the interaction point in a non-segmented, non-pulse-shape-analysis detector to anything more than some averaged value (be that surface, middle, wherever, as long as it is constant).


SmithJK commented 9 years ago

@pcbend, you mentioned a difference between "matching a distribution [and]... trying to correct for it event by event." Would this be the difference between:

  • constructing an angular correlation between two gamma-rays, first by using an angular index to define which detector pairs have the same geometric opening angle, and then defining the values of those opening angles based on the central energies of the two gamma-rays

and

  • correcting the position vector of a single HPGe interaction, on an event-by-event basis, based on its energy, then using those vectors to construct an angular correlation?

pcbend commented 9 years ago

Yes, one simply cannot do the second method above. It is equivalent to adding a random number generator to your position. If it is small, it makes no difference, and if it is large, it will only wash out any effect.

The first way above is the traditionally accepted way, where one actually applies such a correction to the theoretically calculated distribution before overlaying it on the experimental values. Such a correction is normally called a "Q-factor" (also known as the attenuation factor, often given the symbol q when writing out all the parts that go into an a2 and a4) and is calculated from things like detector geometry, distance from the target, and detected energy, instead of being fitted like the other parameters the a2 and a4 are composed of. There is lots of literature about these for cylindrical detectors, which to first order a GRIFFIN crystal falls under.
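For reference, the standard form this enters is (a textbook expression, not anything specific to this class):

W(\theta) = A_0 \left[ 1 + Q_2\, a_2\, P_2(\cos\theta) + Q_4\, a_4\, P_4(\cos\theta) \right]

where the attenuation factors Q_2 and Q_4 are calculated from the detector geometry, source-detector distance, and energy, and only A_0, a_2, and a_4 are fitted.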


SmithJK commented 9 years ago

Here's my first sketch-out of what this class (namespace?) would look like. I'd appreciate feedback on... well basically anything. I've not explicitly included fitting the Legendre polynomials to the data - these are just functions for creating the angular correlation in the first place.

Members:

THnSparse* prompt; // this is the prompt spectrum, axes: array#1,array#2,energy1,energy2
THnSparse* bg; // this is the BG spectrum, same axes as prompt;
(type?) indexmap; // this should correlate array number combinations with an angular index
(type?) anglemap; // this correlates angular index with an angle
(type?) groupmap; // this specifies how grouping will occur
int* weights; // array of weights, as a function of angular index

Functions:

void SetPromptAngularCorrelation(THnSparse*);
void SetBGAngularCorrelation(THnSparse*);
TH2F* CreateEfficiencyWeightedSlice(double energy, double min, double max); // will weight each contribution from individual crystals by the efficiency at that energy, gating between min and max, with the energy axis projected out
TH2F* CreateSlice(THnSparse* hst, double min, double max, bool folded = kFALSE, bool grouping = kFALSE); // this will create a 2D index vs. energy spectrum from a spectrum and the map
TH2F* CreatePromptSlice(double min, double max, bool folded = kFALSE, bool grouping = kFALSE); // this will create a 2D index vs. energy spectrum from the prompt spectrum and the map
TH2F* CreateBGSlice(double min, double max, bool folded = kFALSE, bool grouping = kFALSE); // this will create a 2D index vs. energy spectrum from the BG spectrum and the map
void FitSlices(TH2F* hst, TPeak* peak); // this will fit a TH2F from CreateSlice, similar to FitSlicesY (probably utilizing that). Its output will be (at minimum) a histogram of area vs. angular index.
TGraph* CreateGraphFromHst(TH1F* hst); // this will convert a histogram with x-axis of angular index to a graph with an x-axis of angle
void PrintMap(); // will print the map, somehow
int GetAngularIndex(int arraynum1, int arraynum2); // returns the angular index for a pair of detectors
double GetAngleFromIndex(int index); // returns the opening angle for a specific angular index
double GetEfficiency(int arraynumber, double energy); // returns the efficiency from the TChannel
void SetAngleMap(double* angles); // sets the angles in the map, with an array of angles, where the angular index is determined by the index of the array element
double GenerateWeights(int* detectors); // with input of array number array (crystals that were present in data collection), generates the weights for each angular index (no input generates weights for 16 detectors)
double SetWeights(int* weights); // input is weight array itself
double GenerateIndexMap(); // not sure what kind of input we need for something other than the default
double GenerateGroupMap(); // not sure what this function would actually do
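For illustration, a rough usage sketch against this interface (everything here is hypothetical until the class exists; the file name and gate values are made up, and the TPeak construction is shown schematically):

#include "TFile.h"
#include "TH2.h"
#include "THnSparse.h"

void MakeCorrelation()
{
   TFile f("correlations.root");                     // hypothetical input file
   THnSparse* prompt = (THnSparse*)f.Get("prompt");
   THnSparse* bg     = (THnSparse*)f.Get("bg");

   TAngularCorrelation ac;                           // the proposed class - does not exist yet
   ac.SetPromptAngularCorrelation(prompt);
   ac.SetBGAngularCorrelation(bg);

   // gate on the first gamma, then fit the second gamma in each angular-index slice
   TH2F* slice = ac.CreatePromptSlice(1170., 1176.);
   TPeak peak(1332., 1325., 1340.);
   ac.FitSlices(slice, &peak);
   // ...then convert the resulting area-vs-index histogram with CreateGraphFromHst
}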

Also to modify:

SmithJK commented 9 years ago

Something else I'm not happy about here: I think the peak fitting in FitSlices will go poorly, and there should be some easy way to diagnose and fix the bad fits.

VinzenzBildstein commented 9 years ago

I would probably make FitSlices return the reduced chi square of the fit. That could then be used as a criterion for a good fit, and if the fit was bad, one can take a look at the fit itself (which I assume will be added to the input histogram). Of course one could just get the fit from the histogram and then the chi square and NDF from that fit, but this would be a bit easier to use directly in an if-statement.
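Something like this is what I mean (a sketch only, against the hypothetical interface above, with an arbitrary threshold):

// assumes FitSlices is changed to return the reduced chi-square
double FitAndCheck(TAngularCorrelation& ac, TH2F* slice, TPeak* peak)
{
   double redChi2 = ac.FitSlices(slice, peak); // hypothetical return value: chi2/NDF
   if(redChi2 > 2.) {                          // arbitrary cut for a suspicious fit
      slice->Draw("colz");                     // take a look at the fit itself
   }
   return redChi2;
}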

SmithJK commented 9 years ago

Schematic of how you would create an angular correlation: [image: correlation_creation_workflow] Edit: There should also be a peak-refitting circle for the background.

SmithJK commented 9 years ago

One of the outputs of FitSlicesY is a histogram of chi-squared values and I'd like to continue that. However, if we're fitting multiple 1D histograms sliced from a 2D (which is currently what I'm imagining), there is no single chi2 value to return. I suppose I could return the highest chi2. That would be a good initial indication. I think some kind of visual confirmation/diagnostics would also be helpful here.

AndrewMac1 commented 9 years ago

Just wondering what the reason is to make the 2D projections and then go to 1D and fit. Could we not just make the 1D and fit the 1D (meaning skip the 2D), or is there a reason for generating these plots (i.e. we want it for cross-talk or something)?

-Andrew


pcbend commented 9 years ago

I am going to piggyback a bit on Andrew's question here because this has me a bit confused... What exactly is the input to all of this? An AnalysisTree?


pcbend commented 9 years ago

or is there somehow already a correction put in for individual detectors?


SmithJK commented 9 years ago

More details:

  • PromptArray is an input histogram with axes of arraynumber1, arraynumber2, energy1, and energy2. By keeping this as a pair of array numbers and not an angular index, we keep the information Ryan was concerned about losing.
  • PromptSlice is gated on energy1 and has axes of angular index and energy2. If you don't want to do event-mixing, but instead want to divide out the efficiencies individually, you can create an efficiency-weighted slice.
  • CorrectedSlice is an option to correct the histogram for energy-dependent or time-random backgrounds. Same axes as PromptSlice.
  • PromptHistogram is a 1D histogram with the x-axis as the angular index and the bin content set to the area of the coincidence peak for that angular index.
  • IndexCorrelation is a 1D histogram with the x-axis as the angular index and the bin content set to the event-mixed corrected peak areas.
  • AngularCorrelation is a TGraph (TGraphAsymmErrors?) with the x-axis as the opening angle and the y-axis as the event-mixed corrected peak areas.

It's sketched out as a 2D projection first because that's how I first conceived of it. I'm very hesitant to go straight from a THnSparse full array to a PromptHistogram in a single function - I think this is hiding too much from the user. I see it less as a 2D projection and more as an efficient way of storing the 1D histograms that we need to fit. It provides a good rest point to play with BG subtraction, either pure time-random or energy-dependent.

The input would be the THnSparse histograms specified above. The idea here is to NOT immediately reduce this to angular indices so that we can (if we choose) avoid the event-mixing technique and put in individual efficiency corrections for each detector. Having this as an option provides a nice check on our event-mixed angular correlations once we have efficiencies parametrized.

r3dunlop commented 9 years ago

Not sure if this is relevant or not, but I would stay away from correcting spectra by efficiencies and only correct “numbers” by efficiencies, i.e. the fitted area. You may end up scaling the background by an efficiency weight, which I don’t believe is correct. Also it takes you into the realm of weird statistics where you would have to scale all of the errors in the histogram by the efficiency factor. Calculating the errors in a THnSparse requires essentially creating another THnSparse (at least a sparse array) in the background which is connected to the original. Any object over 1 GB cannot be written to a ROOT file, and that is very easy to do when you calculate the errors.

You must also make sure that any time you do time-random or Compton-background subtraction you treat the errors in the bins properly. This means that Sumw2() must be called on at least one of the spectra before doing the subtraction. Otherwise, the uncertainties in the areas that you obtain from doing these subtractions will be far too small. (100(10) counts - 100(10) counts = 0(14) counts, not 0(0).) This actually isn’t strictly true (neither are the sqrt(N) errors before the subtraction), but it’s really the best approximation we have.
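i.e. something like this (a sketch; the scale factor, e.g. a ratio of time-gate widths, is up to the analysis):

#include "TH1.h"

// Subtract a scaled background with proper error propagation.
void SubtractBackground(TH1* prompt, TH1* background, double scale)
{
   prompt->Sumw2();                 // store squared weights so bin errors propagate
   prompt->Add(background, -scale); // bin errors now add in quadrature
}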

Subtracting time-random backgrounds will require one to create a time random histogram to subtract away and use as input. It cannot be created at this level as I believe event construction would already be handled. This may have to be stored as well.

As for the “bad fit” problem, the solution is that the user should look at every fit anyway. People have to be responsible for something. If you have 50 fits, then do something like open up 5 canvases with 10 fits on each. I would not spend time trying to figure out a figure of merit. I’m not sure if you have this in here, but it would be useful to have a way to interact directly with your histograms/graphs on a point-by-point basis in order to correct bad fits. People looking at every fit should be grateful that they didn’t have to do them all by hand in the first place.

Something I’m starting to learn is that the more I automate things, the more I see people blindly accepting answers without looking at what is happening. I haven’t really had a chance to look into the details of what you are doing here, so this may all be moot. However, what I have tried to start doing is to write classes as more of a framework for others to build off of, i.e. TDecay, for the most part, will not be used stand-alone. I have a 4-500 line program that uses it to do the analysis the way I want. I could have made TDecay contain these things (chop-plots, dead-time corrections etc.), but I think this stuff is better to have in a script because it’s more transparent, puts the onus on the user, and is quite honestly a lot more flexible.

I have faith in what you are doing, I’m just offering my two cents based on the mistakes I have made. Make useful functions, make the users the ones who use them. We want an angular correlation framework, not an API.

Again, you may have already thought about all of this and it may all be useless, so feel free to tell me to go away :P.


pcbend commented 9 years ago

ok, so I am still confused here.

PromptArray: so, is this essentially a gamma-gamma matrix for every combination of detectors, just so the axis went something like (detnum*16000) + E_gamma? This seems highly unlikely, as that would be a huge matrix that would make no sense to keep around; I must be understanding this wrong...

Prompt slice: we have moved away from talking about array position here to angular index. This I think I get, but I do not see why we would need a "PromptArray" to get it. Also, what exactly is event-mixing?

CorrectedSlice: Are we not worried about detector or geometrical (target, beam pipe, something in the way) effects? In my past experience with angular distributions/correlations they have a much larger effect than any opening angle/energy dependencies... also, they cannot be corrected for the same way energy and solid-angle corrections can be in the calculation of the individual parts that compose a2 and a4. I am also a bit confused why anyone would make it this far without having already subtracted their time-random background.

The next three sound like the same thing to me; again, I do not know what event-mixing is here.


r3dunlop commented 9 years ago

@pcbend, To your first point, you can get around the huge matrix by creating a sparse matrix. It only allocates the memory for the bins that are actually used. I've also been playing around with writing trees that behave as matrices, but that's a whole other story....

Event mixing is when you "build a correlation" between the two gamma rays, but your time gate isn't prompt anymore. Instead, what you do is look across many other detected gamma rays, which are far enough away in time that they have to be uncorrelated. What this does is reveal potential correlations that are not electromagnetic-decay in nature (geometry, DAQ, etc.), as this spectrum should be completely uncorrelated. You can then correct your true correlation by this "uncorrelated", "event-mixed" data.
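Schematically (a toy sketch of the idea, not the actual sort code; the buffer depth and the Hit type are made up):

#include <deque>
#include "TH2.h"

struct Hit { int crystal; double energy; };

// Correlate the current gamma with gammas from previous events, which
// cannot share a cascade with it, to build the "uncorrelated" spectrum.
void MixEvents(const Hit& current, std::deque<Hit>& buffer, TH2* eventMixed, size_t depth = 10)
{
   for(const auto& old : buffer) {
      eventMixed->Fill(current.energy, old.energy); // or fill by crystal pair / angular index
   }
   buffer.push_back(current);
   if(buffer.size() > depth) buffer.pop_front();
}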

I also agree with you on the time-random bg subtraction, as per my message above.

pcbend commented 9 years ago

To the first point, I still don't see the value of this. I think providing such a matrix will allow for a false sense of security. There are many nuances that go into making such a plot, and I am not convinced that a one-size-fits-all approach for creating such a histogram will work. Having the person doing the analysis spend some time thinking about what they are making will, I think, not only help them gain a better understanding but also invite more minds into thinking about how to make things better. Helping people make gamma-gamma correlations is a great idea, but shouldn't it start from an angle-versus-counts plot (however it is called), and then help people extract the a2/a4 from it, and more importantly the underlying physics of those constants given a set of J-values? This is the type of crank that I see as being more similar from one analysis to the next.

ok, so "event-mixing" events are problems with daq, cross-talk, etc? So if we have a method for pulling these out, these should be treated the same as time-random subtraction. There is no reason to even think about having them in a matrix one is trying to analysis especially if we know the are a bg source. Correlations can only be measured from things that are emitted in a cascade, where both gammas are detected and correlated as being emitted from the same cascade. On top of that, the lifetime of the intermediate state must be short (less than ns-ish) in order for the nucleus to have any memory of the orientation. Allowing things to even potentially exist in a spectrum one is try to get a correlation from can only wash out the effect.

Perhaps I am missing a bigger picture here. If I am, please correct me.


r3dunlop commented 9 years ago

The event-mixing is being handled like time-random subtraction (I believe). It isn’t in the same matrix as the real data.

I strongly agree with the removal of crank turning though. It’s amazing how many bugs I have found in old analysis software (or issues with thinking about how to perform an analysis) because a program was written to perform the job once, and then was never considered again. My point above was to make basic functions that perform simple, obvious tasks which would go into an analysis like this. That might include the creation of the so-called angular index map, or “folding”, but I agree that we must be very careful about how much this does that people don’t have to do themselves.

The conversation of:

1: I have a bad result and I don’t know why.
2: Have you checked the individual fits are OK?
1: No, I haven’t.

has happened far more often than I would like to admit. This doesn’t only mean people aren’t necessarily thinking about an analysis, but can lead to incorrect results. We should try to do whatever we can to avoid this, and if this means making it slightly less convenient by making people handle intermediate steps, then this is what should be done.


pcbend commented 9 years ago

I think Ryan and I are on the same page, as I agree completely with the above statement. What worried me about the presented approach is how encompassing it is, and how the majority of the work presented here has very little to do with the angular correlations themselves, and mostly with how one gets the data to do an angular correlation.

I am all for providing tools to help people do analysis (as this project is a testament too) but think it is a huge mistake to give someone something (or looks like it is something) that can do the analysis for them (in one-two steps). There can be a substantial grey area between the two ideas, but having something that gets people to question what they are doing I think makes for a saner life.


SmithJK commented 9 years ago

I agree with Ryan's statement that we should "make basic functions that perform simple, obvious tasks". I also agree that we "must be very careful about how much this does that people don't have to do themselves." So in terms of coding philosophy, I think we are pretty close.

My sketch of how someone would do an angular correlation is a guide for me in creating basic functions that would be useful in this context. Most of the physics will go into creating the input THnSparse matrices (in particular, the BG/event-mixed spectrum), which is left out entirely here. The remaining physics is in the background subtraction, the manual efficiency corrections (in lieu of event mixing), and the error propagation. I don't plan on explicitly including the background subtraction, but I would like to include functions that assist with manual efficiency corrections (assuming efficiency polynomials have already been programmed into the TChannel) and do the error propagation correctly for any of the functions involved here. The list of functions from earlier is in line with this philosophy (a sketch of the angle-pair book-keeping follows the list):

  • CreateSlice "will create a 2D index vs. energy spectrum from a spectrum and the map" - this is mostly book-keeping (which angle pairs have the same opening angle) and a single energy gate. CreatePromptSlice and CreateBGSlice will just apply the CreateSlice function to the already assigned prompt and BG/event-mixed matrices. CreateEfficiencyWeightedSlice would do the same, with some efficiencies included (see below for more discussion on this).
  • FitSlices would provide automation assistance for fitting the peaks. Here is where I think we can definitely have the discussion of what degree of automation assistance is ideal. The function I've written for my own analysis prints all the fits to screen and requires the user to confirm the fit or to suggest a new minimum and maximum to try another fit.
  • CreateGraphFromHst is another book-keeping/conversion function. I've found TGraphAsymmErrors to be a good choice for the final visualization, but depending on the fitting we do to extract a2 and a4, this may be more useful as another TH1.
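(As a rough illustration of the angle-pair book-keeping behind CreateSlice, here is a minimal sketch of building an angular index map from the crystal positions. The positions array and the 0.1-degree rounding are assumptions for the example, not the actual GRSISort interface.)

#include <cmath>
#include <map>
#include "TVector3.h"
#include "TMath.h"

// Sketch: group the 64x64 crystal pairs by opening angle and assign each
// distinct angle an index. positions[] is assumed to be filled elsewhere
// with the GRIFFIN crystal position vectors.
TVector3 positions[64];
std::map<int, int> angleToIndex;   // angle rounded to 0.1 deg -> angular index
int indexMap[64][64];              // crystal pair -> angular index
int nIndices = 0;
for(int i = 0; i < 64; i++) {
   for(int j = 0; j < 64; j++) {
      if(i == j) { indexMap[i][j] = -1; continue; }   // no same-crystal pairs
      int key = static_cast<int>(std::lround(
         positions[i].Angle(positions[j]) * TMath::RadToDeg() * 10.));
      if(angleToIndex.count(key) == 0) { angleToIndex[key] = nIndices++; }
      indexMap[i][j] = angleToIndex[key];
   }
}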

Ryan, I understand your concern about efficiency scaling. Though there is definitely some efficiency for detecting background radiation in the first place, the input efficiency factors are calculated for a source in the middle of the array. Ideally, I agree that we should only scale peak areas. Practically, the motivation for scaling individual events comes from the need to do the efficiency scaling on a crystal-pair basis before summing the crystal pairs that have the same opening angle. Fitting a peak seen by only a single crystal pair before correcting for efficiency would restrict us to analyzing cascades in which each individual crystal pair sees a fittable peak, that is, very high-statistics cascades. Though if we are using this only as a sanity check for the event mixing, that might be okay. I didn't know that errors in a THnSparse had to be a full second histogram; that would be a reason to only apply the efficiency corrections when applying a cut to create a TH2.
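(For concreteness, the per-pair scaling described above amounts to a weighted fill, along the lines of the sketch below; effCrystal is a hypothetical helper standing in for an evaluation of the TChannel efficiency polynomial.)

// Sketch: efficiency-weighted fill for one event in crystal pair (i, j),
// assuming slice->Sumw2() was called at creation so the weights propagate
// into the bin errors. effCrystal(i, e) is a hypothetical helper returning
// the efficiency of crystal i at energy e.
double weight = 1. / (effCrystal(i, e1) * effCrystal(j, e2));
slice->Fill(indexMap[i][j], e1, weight);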

I fully intend to calculate errors properly, and as the code is actually written this is certainly something multiple people should check.

I think we can use the event-mixing background spectrum to do the time-random background subtraction. The catch is finding the proper scaling factor. If it works, we could have a high-statistics background subtraction. If it doesn't, then yes, this would require a separate histogram.
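(A sketch of what finding that scaling factor could look like: normalize the event-mixed spectrum to the prompt one over a region assumed to contain no true coincidences, then subtract. The histogram names and the bin range are placeholders.)

// Sketch: scale the event-mixed spectrum to the prompt one in a region
// assumed to be free of true coincidences, then subtract it. loBin and
// hiBin delimit that region and are placeholders.
double scale = promptSlice->Integral(loBin, hiBin) /
               mixedSlice->Integral(loBin, hiBin);
TH1* subtracted = static_cast<TH1*>(promptSlice->Clone("subtracted"));
subtracted->Add(mixedSlice, -scale);   // errors combine via Sumw2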

Peter, I definitely want to create some processes for extracting the proper a2 and a4 values and tools to assist in identifying spin/mixing-ratio conclusions drawn from those. It's rattling around in my head and in some discussions with Andrew, but I'm focusing on the "generating a counts vs. angles plot" right now.

pcbend commented 9 years ago

So I see no reason to make a standard way of generating counts vs. angles. The first matrix containing the 64x64 individual gamma-gamma spectra is not needed. If all one cared about was book-keeping the angular index, the same output could be accomplished in a single gamma-gamma loop over the tree, the same type of loop needed to make such a matrix:

for(size_t g1 = 0; g1 < gammas.size(); g1++) {        // gammas: the hits in this event
   for(size_t g2 = g1 + 1; g2 < gammas.size(); g2++) {
      hist->Fill(gammas[g1].GetPosition().Angle(gammas[g2].GetPosition()), gammas[g1].Energy());
      hist->Fill(gammas[g2].GetPosition().Angle(gammas[g1].GetPosition()), gammas[g2].Energy());
   }
}

using a small fraction of the memory one would use with the former method. This also eliminates the need for the three extra functions, assuming we use a normal projection command on the energy axis of the above histogram. Additionally, errors are completely punted to ROOT this way. Then all one needs to worry about is how to construct a TF1 out of ROOT::Math::legendre([0],x) to fit it. Of course one does need to pick the gamma energies they are looking for, but is this really such a hindrance? And in the loop one could create as many gamma-gamma gates as one pleases.
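(One way to build such a TF1 in ROOT 6, as a sketch: fit W(theta) = A*(1 + a2*P2(cos theta) + a4*P4(cos theta)) with the GSL-backed Legendre polynomials from MathMore.)

#include <cmath>
#include "TF1.h"
#include "TMath.h"
#include "Math/SpecFuncMathMore.h"   // ROOT::Math::legendre, needs MathMore

// Sketch: W(theta) = p0 * (1 + p1*P2(cos theta) + p2*P4(cos theta))
TF1* wtheta = new TF1("wtheta",
   [](double* x, double* p) {
      double c = std::cos(x[0]);
      return p[0] * (1. + p[1] * ROOT::Math::legendre(2, c)
                        + p[2] * ROOT::Math::legendre(4, c));
   },
   0., TMath::Pi(), 3);
// graph->Fit(wtheta);   // fit the counts-vs-angle graph to extract a2/a4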

Of course this only works for a crude analysis, because it does not take into account the individual detector efficiencies and collection times for each position, which must be calculated for every experiment independently. Going to that point requires making individual pair-wise matrices and scaling each by epsilon_gamma1 * epsilon_gamma2; but working with 64 individual THnSparses will be much faster than trying to work with one giant THnSparse. In fact, we could get creative with file naming and use hadd to do a lot of the bookkeeping and heavy lifting for us.

As for event mixing (which I now understand, thank you :) ) and other bg subtractions: both of these are also going to be very experiment dependent. I still don't see them fitting well into a crank.


pcbend commented 9 years ago

Upon thinking about this a bit more, I'd like to walk back what I said a bit.

I feel "not needed" is way to strong a term, I think "it seems silly" to me would be much more appropriate. I can see the value in what is proposed, I still think it is over kill to get the job done.


pcbend commented 9 years ago

Still thinking about the 64 x 64 x Egamma1 x Egamma2 matrix and I am struggling here a bit. Let's say we have a full range of 2 MeV at half a keV per channel. That's 4000 channels per matrix axis, so a traditional matrix would have 4000 x 4000 x 64 x 64, about 6.6e10 bins - 65.536 GB even at a single byte per bin, and four times that with 4-byte ints. Let's say half the bins have at least one count in them and the THnSparse compression algorithm is perfect; that would still be a ~32 GB matrix. This seems out of the realm of feasible analysis for pretty much everyone's computer, even if we are able to cut it down by another factor of four...


r3dunlop commented 9 years ago

You would be surprised at how few bins are filled in a gamma-gamma matrix, but I agree, this might be pushing it. I think efficiency corrections can't be made standard, or else we must come up with an effective way of doing them.
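(For context, the sparse-matrix option discussed earlier in the thread would be declared roughly as in the sketch below; the axis counts, ranges, and the per-event values are illustrative only.)

#include "THnSparse.h"

// Sketch: a 4D sparse histogram (crystal1, crystal2, E1, E2) that only
// allocates memory for the bins that are actually filled.
Int_t    nbins[4] = {64,  64,  4000,  4000};
Double_t xmin[4]  = { 0.,  0.,    0.,    0.};
Double_t xmax[4]  = {64., 64., 2000., 2000.};
THnSparseF* hn = new THnSparseF("ggmat", "gamma-gamma angular matrix",
                                4, nbins, xmin, xmax);
Double_t point[4] = {cry1, cry2, e1, e2};   // hypothetical per-event values
hn->Fill(point);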


SmithJK commented 8 years ago

Initial development pushed and merged.

SmithJK commented 8 years ago

Any identified issues or upgrade requests can be raised as separate issues.