jingqiangye opened this issue 5 years ago
Lowering the S2 width requirement should be a better idea. Several factors impact the S1 width:
In XENON1T, point 3 is the limiting factor, but the SanDiX sealed TPC is 20 times smaller, so the limiting factor here is the triplet lifetime, which is around 24 ± 1 ns. We could lower the threshold to 40 ns; it may sacrifice some S1 detection, but we focus on single-electron S2s, so who cares :)
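As a rough sanity check on the 40 ns figure (a sketch only: it treats the S1 as a pure triplet exponential with τ = 24 ns and ignores the singlet component):

```python
import math

def light_fraction(window_ns, tau_ns=24.0):
    """Fraction of triplet scintillation light emitted within a window,
    assuming a simple exponential decay with lifetime tau."""
    return 1.0 - math.exp(-window_ns / tau_ns)

print(light_fraction(40.0))  # ~0.81
```

So a 40 ns window still contains about 81% of the triplet light, which supports the idea that we don't lose much by tightening the S1 width cut to 40 ns.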
The S2 (S1) width requirement can't be relaxed too much, otherwise some large S1s will be misidentified as S2s, see https://github.com/darkmatter-ucsd/SanDP/pull/14#issuecomment-499595860
I took a look at the S2 width and peak identification threshold, and found that it's really not easy to identify very small S2s, say less than 50 PE. That could be why, after updating the processor in https://github.com/darkmatter-ucsd/SanDP/pull/14#issuecomment-498863604, we see more S2s around 50 PE but still couldn't lower the S2 lower limit.
Currently the S2 width requirement is set to 100 samples (400 ns) above threshold, see the config file. And the baseline sigma after the amplifier is usually 10 mV, see my note. Let's say the S2 signal is an exact rectangle of this width and height; then, from this line, assuming an average PMT gain of 3e6 e, we can easily compute the 'minimal' S2 size:
0.03 V × 100 samples × 4.9932e8 / 3e6 / 10 ≈ 49 PE

(Here 4.9932e8 = 4 ns / (50 Ω × 1.602e-19 C) converts V·samples into electrons at the digitizer, 3e6 is the PMT gain, and the factor 10 is presumably the amplifier gain.)
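The back-of-the-envelope number can be reproduced in a few lines of Python. The 50 Ω load, 4 ns sample time, and ×10 amplifier gain are my assumptions, read off the constants in the estimate above:

```python
ELECTRON_CHARGE = 1.602e-19  # C
LOAD_R = 50.0                # ohm, assumed digitizer input impedance
SAMPLE_DT = 4e-9             # s per sample (100 samples = 400 ns)

def minimal_s2_pe(threshold_v=0.03, width_samples=100,
                  pmt_gain=3e6, amp_gain=10.0):
    """Area of a rectangular pulse (threshold x width) converted to PE."""
    area_vs = threshold_v * width_samples * SAMPLE_DT   # V*s at the digitizer
    electrons = area_vs / (LOAD_R * ELECTRON_CHARGE)    # electrons after amplifier
    return electrons / amp_gain / pmt_gain              # photoelectrons

print(minimal_s2_pe())  # ~49.9 PE
```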
This is around the smallest S2 we got. Sometimes the threshold is lower, and the waveform is not always above threshold, so the S2 size can be slightly smaller than that. If the calculation is correct, then the only way to lower the S2 lower limit is to lower either the threshold or the S2 width requirement. Thoughts?
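Since the estimate scales linearly in both knobs, a quick table shows what each option would buy us (same assumed constants as above: 50 Ω load, 4 ns samples, ×10 amplifier gain):

```python
ELECTRON_CHARGE = 1.602e-19  # C
LOAD_R = 50.0                # ohm, assumed
SAMPLE_DT = 4e-9             # s per sample
PMT_GAIN = 3e6
AMP_GAIN = 10.0              # assumed amplifier gain

def minimal_s2_pe(threshold_v, width_samples):
    area_vs = threshold_v * width_samples * SAMPLE_DT
    return area_vs / (LOAD_R * ELECTRON_CHARGE) / AMP_GAIN / PMT_GAIN

for thr in (0.03, 0.02):
    for width in (100, 50):
        print(f"threshold={thr*1e3:.0f} mV, width={width:3d} samples"
              f" -> min S2 ~ {minimal_s2_pe(thr, width):.0f} PE")
```

Halving the width requirement to 50 samples brings the floor down to roughly 25 PE, and a 20 mV threshold at the current width gives roughly 33 PE; doing both gets close to the single-electron regime.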