GRIFFINCollaboration / GRSISort

A lean, mean, sorting machine.
MIT License

Walk correction implementation #252

Closed SmithJK closed 9 years ago

SmithJK commented 9 years ago

During the 2014 GRIFFIN runs, the triggering and timing was done with leading-edge discrimination. This means we have a significant (up to 30 clock ticks) amount of walk for low-energy gammas.

Time shifts in general could affect the event construction, the events that pass the "coincidence" window, and the addback algorithms. As long as we keep such a large time window for event construction, the effect of a walk correction there would be minimal. It's fairly easy to execute the correction before judging "coincident" events in MakeMatrices. The problem we're encountering right now is how to incorporate this into the creation of the addback hits.

Is there existing infrastructure that we could take advantage of to implement a walk correction? If there is no existing infrastructure, are there pre-existing ideas/plans for implementing this? I imagine this might be related to implementations of the CFD sub-timestamp correction, but I haven't located where that occurs in the code.

r3dunlop commented 9 years ago

I think this is where I have to head once I get other things under control. I was thinking about performing a banana cut on an energy vs timestamp difference. The add-back question is a good one though. We seem to be losing a bunch of information at this level by making the add-back hits. @pcbend, is there a reason we make the add back events in the analysis tree? Perhaps we should consider making them in something like a MakeMatrices script from the regular griffin_hits. This also leaves the door open for others to come up with their own add-back schemes in their own scripts. I think we could still make the standard “rough" add back in the analysis tree to check on things, but in my opinion, a final good version of add-back should be done later.


r3dunlop commented 9 years ago

@SmithJK if you want to go ahead and plug on with figuring out how to do these corrections be my guest. I'm finishing up with Midas File stuff right now and can definitely help when I'm done.

pcbend commented 9 years ago

To first order there isn't a whole lot we can do on the low energy walk front. You are right though, we can improve the add back by making sure the low energy stuff is accounted for.

The easiest way to do this is, as Ryan said, to make a 2D gate; the time gate becomes a banana and everyone is accounted for. The only real issue here is that we have to have the data already sorted into coincidences to get the gate right. If we are doing a final analysis this is a non-issue, as we can make it/the addback at our leisure.

Now for a proper "real time" correction: to do things really properly we would have to get the correction from PSA done inside the DAQ. I don't see this happening anytime soon. A similar, cheaper method would be to build up a basis of what the average corrections are and have a sort of look-up table to guess what the walk correction values are depending on the input energy. This of course will depend on the gain value and may depend on detector/digitizer effects; in other words, I am not sure how reliable it would be. But if it works, we can keep using straight gates.

Now, why do we build them in the analysis tree? The simple answer is because we can. We are already looping over the hits, so I don't think we lose anything by doing it live. Right now I believe the coincidence window for addback is 100 ns, which may be too small for low-energy scatters.

The majority of scatters are going to be above 300 keV, so statistically we should be getting it right most of the time. This agrees with Evan's Geant4 simulations; I presented the efficiency, addback efficiency, and simulated curves in a group meeting some time ago, and they agree.

So I guess I am not sure what we are trying to accomplish. I can make a build-addback function that takes a TCutG if we really need to. Perhaps we should start the conversation about what information we are losing... I am not aware of any, but am interested in what it is.

As a final thought here: from the beginning of the project we have tried to make things as general as possible. I have no issue making tools to allow people easier access to the data, but I think we should be wary of ever trying to provide a "final" solution. I am not sure if you guys remember, but we originally wrote MakeMatrices as a guideline for how to access the data, not as a method to do the analysis for them.


pcbend commented 9 years ago

Looking at the addback this morning, the time correlation is actually 200 ns from the last hit. If we had a series of hits, each could be 200 ns apart (based on the hits' TimeStamp values). This seemingly arbitrary value came from looking at the difference between two times vs. the difference between two energies (plot attached; the x units here are in delta-1 ns (I think; this is taken from the value stored in the grif-cfd word), y in delta-keV).

This data was taken from 152Eu (run02400_000.mid), one of the eff runs at the end of the December beam times. Energy calibrations were done using Jenna's 46Ca.cal file.

How good this value is, is clearly open to some interpretation. Another way I can think of to account for the low-energy walk in the addback is to allow different time gates depending on the change in energy, but before we do that, one needs to explore what these different "pockets of coincidence" are more carefully. Thoughts and comments are most welcome.

deltae_deltat

pcbend commented 9 years ago

for anyone interested this root file is on grsmid01:

/data4/tigress/PeterSteffen/scratch/analysisOUT.root

the script used to make it is:

/data4/tigress/PeterSteffen/scratch/AnalysisLoop.cxx

r3dunlop commented 9 years ago

Would we actually expect them to be 200 ns apart? I feel like they could be up to 200 ns apart, but it shouldn't be cumulative. The detectors see all of the events at more or less the same time, and the original 200 ns time difference comes from the leading edge. I would expect that we could have the wrong time stamp by up to 2*200 ns, but our time differences shouldn't be any worse than 200 ns, no?


VinzenzBildstein commented 9 years ago

In my opinion there are two different things we need to do. The first one is to provide a quick and easy way to analyze runs while we're taking beam. MakeMatrices was written for that purpose. For this we should use a relatively wide straight time window to create coincidences and a relatively simple method for addback. That way it should work without major modifications for each experiment.

The second thing we need to do is provide some basic tools and example scripts/programs for people to actually do the analysis. For this it does make sense to worry about the walk and to use banana gates. This is something you are getting into right now, and I think this shouldn't be based on MakeMatrices, but rather be a new program.

In the data from my PhD thesis we had separate particle and gamma-ray streams that were leading-edge triggered. To correct for the walk and to calculate the width of the banana gates, I created 2D spectra with cuts on high-energy gamma-ray/particle events and plotted the particle/gamma-ray energy vs. the time difference. Then I projected slices in energy to get 1D time-difference spectra at the different energies, fit those with Gaussians, and then used a fit of the means of the Gaussians to correct my timestamps, and a fit of the widths of the Gaussians to determine the energy dependence of the width of my banana gate. This can all be done via a single script running over one run, saving all the resulting parameters in a TEnv file. That TEnv file can then be read on subsequent runs to correct the timestamps and create banana gates.

SmithJK commented 9 years ago

Jenn's been the one specifically looking at this out here and she's done essentially what @VinzenzBildstein described: gated on high energy, created a 2D hist of time difference vs. energy of coincident gamma, sliced it up by energy, fit those with Gaussians, and fit the plot of mean values with a function to tell her how to correct the time.

Here is the plot before: jenn_walk_correction_before

and here it is after a walk correction: jenn_walk_correction_after

Reading all the dialogue so far, it seems like everyone is imagining some kind of process of:

  1. Create events and addback hits.
  2. Decide how to account for walk.
  3. Recreate addback hits, accounting for walk.

Methods for fixing the walk could include: changing addback requirements from a straight time gate to a 2D time-energy banana gate, correcting the time based on some user input function, and utilizing the CFD algorithm in the firmware (I think I heard that Chris is ready to test this on the GRIF-16s, but don't quote me on that).

I think it is reasonable to provide some infrastructure for these corrections to occur, especially since the re-creation will utilize GRSISort.

pcbend commented 9 years ago

If Chris is implementing this on the CFD, then all of this becomes redundant. We already have a spot for it in the fragment, and then we will always have the correct time.

So, this is also essentially the "look-up" table I was saying we could create earlier. We do have to be careful with adding this into the base framework, as the correction values could easily change from data set to data set. The issue, to me, is that it is not straightforward how to include this back into the analysis tree. We do have room to add TimeCoeff to the cal file. If we were to fit the time corrections for each channel (combined, if they are similar) as a function of the energy, it would be fairly straightforward to apply this correction function to each time. The mock-up to do this is already in the TChannel::Calibrate functions. If this is something people are interested in, let's work on getting the corrections as a function of energy in terms of a polynomial (order doesn't matter), and I will make sure they get applied as the analysis tree gets sorted.


jpore commented 9 years ago

When I did this correction, I used a function (a/(x+b))+c. To use a polynomial fit I would have had to go to sixth order, and it wasn't very well-behaved at high energies, which is not good since the correction there should basically be zero.

pcbend commented 9 years ago

We can make it not a polynomial; we just need to carefully describe how it works and implement some restrictions in the code to require a set number of inputs in the calibration. This isn't a huge issue for me. In the example, I take it x is the calibrated energy. What are the typical values for a, b, and c?


jpore commented 9 years ago

rough values from my fit (from memory) . . . a=3e02 b=2e01 c=5e-02

pcbend commented 9 years ago

OK, so I can add this in, but I am hesitant to change the values stored in the TimeStamp variable. Can you redo this (energy vs. time) for the values stored in the CFD variable instead of the TimeStamp variable?


jpore commented 9 years ago

yes, I can do this and get back to you.

r3dunlop commented 9 years ago

Ok, so as this must be done for all current GRIFFIN analyses, I think it is safe to say that this is a priority. I was wondering if there was a way we could divvy up the work to get it done faster.

pcbend commented 9 years ago

If we have a formula, it is really just as simple as putting it into the TChannel::CalibrateCFD function. Then we check that we have the right number of coefficients and "calibrate the time" at the same time we calibrate the energy; we already have everything set up for it. I just wanted to make sure the same formula that worked on the timestamps also works on the CFD.


r3dunlop commented 9 years ago

So the goal as of right now is to make our own CFD from the time stamps, right? Are these parameters the same for every channel? One could imagine a situation in which the leading edge comes at slightly different times from different preamps, no? I honestly don't know, since I haven't looked at this data myself. I'm sure even if it does vary, it isn't a large change.

I would assume that in the future the CalibrateCFD function would not care about the timestamp in the same way. So isn't this sort of a temporary calibration function?

I can also make a simple TCal which can set all of the channel parameters' CFD corrections for you and forces the write-out of the correct number of parameters. If the functional form of the calibration is going to be different in different systems, this would be a good place to set that as well (in the cal file).

pcbend commented 9 years ago

CalibrateCFD is something I put in a long time ago to handle a similar problem. The plan right now is to use it to "calibrate" the CFD value as a function of the energy detected in a crystal. Yes, it could be different for every preamp. Hopefully it will be similar enough for all 64; if it is not, we will have to do a fit of the energy versus delta-CFD for each of the individual channels (I don't think this will be necessary). We are going to apply the correction values to the data in a similar way as when we do an energy calibration. This gives the user the ability to add them in, adjust them, or even remove them as they see fit, and gives them an easy way to apply them back to the data (re-sort the analysis tree with the CfdCoeff in the calibration file).

As long as we are doing this, these CFDCoeff values should be checked for each data set. My ultimate hope is that, if we are going to worry about this, we get a proper walk correction from PSA in the data stream, which we can store in the ZC variable of the FragmentTree. This value, being derived directly from the waveform, would be much more robust than what we are doing here. Once (if?) this happens, we can simply remove the CfdCoeff from the cal files and carry on with the correct values.

I see this as being most useful for the Ge, as their rise time changes heavily as a function of energy. I also believe that, because the crystals are of similar size and the gain values are also close to the same, we will be able to get by with one set of values for the entire array. Because of this, I think an automated thing to create the coefficients is overkill right now.


r3dunlop commented 9 years ago

That’s fair. I was just going to make a really simple thing where you could say: SetParameters(blah); SetAllGriffinChannels(); WriteCalFile();

Just so we don’t have to copy/paste each channel. It shouldn’t take long and if these parameters do change per data set it will save time in the long run.


pcbend commented 9 years ago

I am not going to say not to do it. I just didn't want you to feel like you had to.


r3dunlop commented 9 years ago

Ok. At the very least, it’s a placeholder.

On Mar 19, 2015, at 12:31 PM, pcbend notifications@github.com wrote:

I am not going to say not to do it. I just didn't want you to feel like you had to.

On Thu, Mar 19, 2015 at 12:28 PM, Ryan Dunlop notifications@github.com wrote:

That’s fair. I was just going to make a really simple thing that you could say: SetParameters(blah); SetAllGriffinChannels(); WriteCalFile();

Just so we don’t have to copy/paste each channel. It shouldn’t take long and if these parameters do change per data set it will save time in the long run.

On Mar 19, 2015, at 12:20 PM, pcbend notifications@github.com wrote:

CalibrateCFD is something I put in a long time ago to handle a similar problem. The plan right now is to use it to "calibrate" the cfd value as a function of the energy detected in a crystal. Yes it could be different for every preamp. Hopefully it will be similar enough for all 64, if it is not we will have to do an fit of the energy versus delta-cfd for each of the individual channel (i don't think this will be necessary). We are going to apply the corrected gain values to the data in a similar way as when we do an energy calibration. This gives the user the ability to add them in, adjust them or even remove them as they see fit and gives them an easy way to apply them back to the data (re-sort the analysis tree with the CfdCoeff in the calibration file.)

As long as we are doing this, these values CFDCoeff should be checked for each data set. My ultimate hope is if we are going to worry about this, we get a proper walk correction from psa in the data stream, which we can store in the ZC variable of the FragmentTree. This value, being derived directly from the waveform would be much more robust than what we are doing here. Once (if?) this happens, we can simple remove the cfdcoeff from the cal files and care on with the correct values.

I see this as being most useful for the Ge, as there rise time changes heavily as a function of energy. My also believe that because the crystal are similar size and gain values are also close to the same, we will be able to get by for one set of values for the entire array. Because of this, I think an automated thing top create the coefficients is over kill right now.

On Thu, Mar 19, 2015 at 12:02 PM, Ryan Dunlop notifications@github.com wrote:

So the goal as of right now is to make our own cfd from the time stamps right? Are these parameters the same for every channel? One could imagine a situation in which the leading edge comes at slightly different times from different preamps no? I honestly don't know since I haven't looked at this data myself. I'm sure even if it does, it isn't a large change.

I would assume in the future, the Calibrate CFD function would not care about the timestamp in the same way. So isn't this sort of a temporary calibration function?

I can also make a simple TCal which can set all of the channel parameter's CFD corrections for you and forces the write out of the correct number of parameters. If the functional form of the calibration is going to be different in different systems, this would be a good place to set that as well (in the cal file).


SmithJK commented 9 years ago

In her data analysis, @jpore saw no large difference in the shape of the walk between different crystals.

jpore commented 9 years ago

Looking at the CFD variable for my data, it looks like it is just the TimeStampLow. To make the discussed correction, you need to fit the time difference as a function of coincident energy; below is a plot that shows the coincident energies versus differences in the TimeStamp. coinceng_timestamps Here is the same for differences in the CFD; you can see that there is no distribution to fit. coinceng_cfd

Just for fun, I applied the same correction that I got from fitting the timestamp to the CFD variable, so below are the coincident energies for the raw and corrected CFD variable. coincenguncorr_cfd coincengcorr_cfd The time window does not get as narrow as it does when we do this correction for timestamps.

pcbend commented 9 years ago

The time values for the cfd are in 10 ns units, which makes them appear similar to the TimeStampLow value. I am a bit worried about your energy vs delta-cfd time spectra. Attached are energy vs delta-cfd and energy vs delta-timestamp spectra for run02400_000 for any two gamma rays that fall in a 2 usec time window (the x-axis on the graphs is in tens of ns).

How did you make your gamma ray energy vs delta cfd spectra above?

timediff

jpore commented 9 years ago

Hmm, I probably did not call the CFD variable correctly. The plot I made is of coincEng->Fill(tempFrag.Cfd[0] - myFrag.Cfd[0], tempFrag.GetEnergy()); using a hacked version of MakeTimeDiffSpec, where myFrag is the first event and tempFrag is the second event.

pcbend commented 9 years ago

The call looks right to me. What run number are you using?


jpore commented 9 years ago

grsmid01:/data1/griffin/47K/run02397_000.mid, a 60Co run. I have a condition built in that the prompt gamma ray has to be the 1332 keV, to avoid the time walk in the prompt, and then I look at all the secondary gamma rays that are in coincidence with it.

jpore commented 9 years ago

The file is on grsmid00, not grsmid01.

pcbend commented 9 years ago

OK, I have repeated what Jenn did earlier. The cfd value is more defunct than I thought it was: a good number of what are clearly good timestamps have crap cfds. Attached is the same plot as I sent before but for run02397 (60Co). I have also required at least one gamma to be ~1332 keV; with this condition met, I plot the energy of each gamma versus the timediff and cfddiff accordingly. The fact that there are so few counts in the cfd plot compared to the timestamp one means that it is not very reliable. Chances are we as a group have known the cfd is crap for some time... sometimes information travels down the streams a bit slowly.

timediff2

I am out for the night, but tomorrow morning I'll implement the time calibration using Jenn's formula from earlier. I am going to store this corrected value in the CFD value, as the value in there now is clearly not doing much.

r3dunlop commented 9 years ago

So the plan is to put a correction in at the analysis level that just rewrites the cfd value based on the timestamp? I’m fine with this. We need to be aware to remove this from the code at some point, though.


pcbend commented 9 years ago

That's the plan. Yes, we do. I'll put a big note in the code, and we should probably leave this open until the CFD value is working.


pcbend commented 9 years ago

OK, plans have changed a tad. With the benefit of morning thought, we are not storing the corrected timestamp in the cfd value. Instead, I have changed the function TFragment::GetTimeStamp() to return a double instead of a long. This function now operates very similarly to TFragment::GetEnergy() (i.e., if the correct things have been defined in the cal file, it returns the corrected value).

The cal file now has a new input option called Walk:. Putting values into Walk: (or into a variable called TimeCoeff, which does the same thing) will allow TFragment::GetTimeStamp() to return a "walk corrected" timestamp value. The addback functions used in TGriffin::BuildHits() already use this function, so any AnalysisTrees made from this point on should gain in low-energy efficiency.

Now, how we are doing this: using the formula from Jenn earlier, I was able to get a moderate correction:

walk1

but nothing ground-breaking... I am not sure where the error was, and instead of fighting it, I changed the formula we are using. The correction is now:

deltaT = A + B * Eng^C

an example of this fit:

walk2

and the "results" using these parameters:

walk3

As one can see from my fit, there is some room for improvement, and I encourage others to try to fit it better using the formula above. The cal file I used to make this can currently be found at:

grsmid01:/data4/griffin/scratch/46Ca_walk.cal

an example of a TChannel in this calfile:

GRG16GN00A {
Name:        GRG16GN00A
Number:      62
Address:     0x0000110d
Digitizer:
EngCoeff:    0.70165 0.31785
Integration: 125
WALK:        -16.02 91.16 -0.2462   // uses formula [0]+[1]*E^[2] to calculate delta t
ENGChi2:     0.000000
EffCoeff:
EFFChi2:     0.000000
}

Let me know if there are any questions.

pcbend commented 9 years ago

I realized I closed this yesterday; it was not on purpose. I think we can close this now, but I want people to see this post first.

r3dunlop commented 9 years ago

Is that a 60Co source you are using for that data? Is there a script that makes these plots in the data currently? I’d like to give it a shot.


pcbend commented 9 years ago

yes, everything is in grsmid01: /data4/griffin/scratch

This was done for run number 02397 (the 60Co run Jenn was using); there is a FragmentTree in the directory.

The script I made, timeloop.cxx, is also in the directory. It needs to be compiled, as opposed to being run as a ROOT script.

to compile: g++ timeloop.cxx `grsi-config --cflags --libs --root`
to run: ./a.out fragmentXXXXX.root

Right now the TChannels are hard-coded to read 46Ca_walk.cal (also in the directory); the program takes a fragment tree given on the command line and produces a file called junk.root (also hard-coded). The names of the hists in junk.root are a bit weird due to a spelling mistake, but the titles are right.


jpore commented 9 years ago

Okay, so I have been spending a little bit of time on this trying to reproduce Peter's results so that we can close this discussion. Peter, when you fit your function, is that the time_diff vs. energy matrix that you fit, or did you fit the energy slices? When I use your function on the energy slices, I do not get a very good correction.

pcbend commented 9 years ago

The matrix I fit had the time diff on the y-axis and the energy on the x-axis. I then fit the matrix (just the region I showed). This should give a better correction than fitting slices, as the sample size (points to fit to) is larger.


r3dunlop commented 9 years ago

I'm going to be using timeloop now so I am going to add it to utils. That way compiling and running is a little less painful.

pcbend commented 9 years ago

I am not sure I understand. If the walk coeff are in the cal file, simply using the function GetTime() for a fragment should fix it now...


r3dunlop commented 9 years ago

Oh, I was going to make sure these corrections are the same in my data.


r3dunlop commented 9 years ago

I can also not move it. I’m fine with that.


pcbend commented 9 years ago

The fit might have to be (should be) redone for each data set, the same as an energy calibration, but the method should be sound. I guess we should write a util that does the fit.


r3dunlop commented 9 years ago

So as it stands, we have to go through and type a WALK in for every channel in the cal file? For reference here is my fit: screenshot 2015-04-02 15 45 10 screenshot 2015-04-02 15 48 14

pcbend commented 9 years ago

Yes, but like everything in the cal file, it is case-insensitive. I am a bit surprised how different these values are from when I did it for Jenn and Jenna's data. I suppose that reinforces the fact that we need to fit this for every data set.


r3dunlop commented 9 years ago

I'm adding a SetWalk function to TChannel....Will be quicker than typing this all out....especially if I screw up....

r3dunlop commented 9 years ago

Or can I just be lazy and set Time coefficients instead?

pcbend commented 9 years ago

They do the same thing; however, the name is now legacy and not descriptive.


r3dunlop commented 9 years ago

OK, well it has taken me about 5 minutes, but I have written something up that will quickly change all channels and output a TIMECoefficient to the cal file. It will only print this if the size of the coefficient vector is > 0. I can make it print the word WALK instead, but I'd rather not hack something like that into TChannel for the handful of experiments that will use it. One can also do a replace-all on the TIMECoefficient once it is done. I'll post instructions before the end of the week.

pcbend commented 9 years ago

I think all experiments will use this until something is implemented in the daq, which may never happen.


r3dunlop commented 9 years ago

Input: screenshot 2015-04-02 16 43 32 screenshot 2015-04-02 16 43 51

The line tc->WriteToAllChannels("GR") just ensures that you are only writing to channels with a mnemonic that begins with GR (and not SCEPTAR). You can write as much of the mnemonic as you want there, and it will make the writing more and more specific. All TCals behave this way. You will see a bunch of output telling you what it is writing to each channel.

Resultant cal file: screenshot 2015-04-02 16 46 59