dneise opened 8 years ago
I believe not all cameras use this high/low-gain channel strategy. (This assumption needs to be checked! Reference needed!)
So from the CTA array point of view it might actually be advantageous to merge the useful information stored in the low- and high-gain channels at an early stage, since it makes the LST output more similar to the other cameras' outputs.
During calibration the raw data is usually converted from 16-bit integers to 32-bit floats, since the offsets that are subtracted are fractional numbers. So DRS4 calibration in general increases the data size by a factor of 2.
By merging the two timelines into one, the amount of data is reduced by a factor of 2. The resolution of a 32-bit float is so much higher than that of a 16-bit integer that the useful information from both gain types could be stored in a single timeline without information loss.
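A minimal sketch of what such a merge could look like, with all numbers invented for illustration (a 12-bit ADC, a hypothetical gain ratio of 16, made-up pedestal and saturation values):

```python
import numpy as np

# Assumed, illustrative numbers -- not from any LST spec:
GAIN_RATIO = 16.0   # hypothetical high/low gain ratio
SATURATION = 4000   # hypothetical saturation threshold (12-bit full scale is 4095)

def merge_gains(raw_high, raw_low, ped_high=400.0, ped_low=400.0):
    """Merge two gain channels into one float32 timeline:
    take the pedestal-subtracted high-gain sample where it is not
    saturated, otherwise the pedestal-subtracted low-gain sample
    scaled up by the gain ratio."""
    raw_high = np.asarray(raw_high)
    raw_low = np.asarray(raw_low)
    high = raw_high.astype(np.float32) - ped_high
    low = (raw_low.astype(np.float32) - ped_low) * GAIN_RATIO
    return np.where(raw_high >= SATURATION, low, high)
```

For example, `merge_gains([500, 4095], [420, 650])` keeps the unsaturated first sample from the high-gain channel (100.0) and replaces the saturated second sample by the scaled low-gain value (4000.0).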
However, merging both timelines is not a trivial thing to do, and I wonder what papers might have already been written about this topic.
I think this could be researched without any real data, but having real data certainly helps. So in order to really do useful work here, one would need actual data with pulses looking similar to real light pulses (regarding shape and temporal behaviour), with varying but known charge.
The task would then be to merge both gain channels somehow and show that the charge estimated on the merged time series correlates well with the known charge of the injected pulses.
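A toy version of that test could look like the following, with everything invented for illustration (Gaussian pulse shape, 3.5 LSB noise, gain ratio 16, simple sum as charge estimator):

```python
import numpy as np

rng = np.random.default_rng(42)
GAIN_RATIO, SATURATION, PEDESTAL = 16.0, 4000, 400.0  # assumed numbers

def digitize(signal, gain):
    """Toy 12-bit ADC: pedestal + gain * signal + Gaussian noise, clipped."""
    noisy = PEDESTAL + gain * signal + rng.normal(0, 3.5, signal.shape)
    return np.clip(np.round(noisy), 0, 4095)

t = np.arange(40)
template = np.exp(-0.5 * ((t - 20) / 3.0) ** 2)  # unit-amplitude pulse shape
true_charges = rng.uniform(100, 30000, size=200)

estimated = []
for q in true_charges:
    pulse = q * template / template.sum()          # pulse with known charge q
    hi = digitize(pulse, 1.0)
    lo = digitize(pulse, 1.0 / GAIN_RATIO)
    merged = np.where(hi >= SATURATION,
                      (lo - PEDESTAL) * GAIN_RATIO, hi - PEDESTAL)
    estimated.append(merged.sum())                 # simple charge estimator

r = np.corrcoef(true_charges, estimated)[0, 1]
print(r)  # should be close to 1 if the merge preserves the charge information
```

The largest charges here saturate the high-gain channel, so a high correlation only comes out if the low-gain samples are substituted correctly.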
As mentioned above, the raw data from the ADCs on a Dragon board (16-bit integers) is generally up-converted to 32-bit floats as soon as the offsets are subtracted.
Keeping in mind that there is a certain noise level which cannot be overcome, it might be totally crazy to treat the offsets as floats. Say the residual noise after the most careful DRS4 offset subtraction is still 3.5 ADC counts (or LSB, as the engineers say). Why would one need to know the offsets to a fraction of an LSB? This does not really help to reduce the noise any further.
Also keep in mind that the ADC on a Dragon board, an AD9637, only has a resolution of 12 bits.
So of the 16-bit integers we use to store, transmit, and work on those ADC words, 4 bits are always unused.
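Just to put a number on the "fraction of an LSB" argument: rounding the offsets to the nearest integer adds uniform quantization noise with a standard deviation of 1/sqrt(12) ≈ 0.29 LSB, which is negligible next to an assumed residual noise of 3.5 LSB (independent noise sources add in quadrature):

```python
import numpy as np

residual = 3.5                   # assumed residual noise after offset subtraction, in LSB
quantization = 1 / np.sqrt(12)   # std of the error from storing offsets as integers
total = np.hypot(residual, quantization)  # independent noise adds in quadrature
print(total)  # ~3.512 LSB, i.e. only a ~0.3 % increase over 3.5 LSB
```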
Now I am wondering: what if we could actually merge the useful information of both gain channels (see my comment above), but into 16-bit integers instead of 32-bit floats? Maybe the additional resolution of a factor of 16 is enough to represent the measurements of both channels without loss of relevant resolution.
Or let me phrase it shorter:
Maybe the useful information of the two 12-bit ADCs we have can be represented without loss of information in a single 16-bit integer. So we would create a pseudo-16(or fewer)-bit ADC from the two gain channels.
As long as we do a good job, this only helps: people can think of an LST channel as a voltage measured by a 2..5 GHz ADC with 16-bit resolution, not as two channels with 12-bit resolution each, of which the low-gain channel is only of limited interest...
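A minimal sketch of such a pseudo-16-bit encoding, assuming an exact integer gain ratio of 16 and ignoring pedestals and nonlinearity (which a real scheme would of course have to handle):

```python
import numpy as np

GAIN_RATIO = 16    # assumed exact integer gain ratio
SATURATION = 4000  # assumed high-gain saturation threshold

def to_pseudo16(raw_high, raw_low):
    """Pack two 12-bit gain channels into one uint16 per sample:
    the high-gain value directly where it is unsaturated (1 LSB steps),
    otherwise the low-gain value times the gain ratio (16 LSB steps,
    max 4095 * 16 = 65520, which still fits into 16 bits)."""
    raw_high = np.asarray(raw_high, dtype=np.uint16)
    raw_low = np.asarray(raw_low, dtype=np.uint16)
    return np.where(raw_high < SATURATION,
                    raw_high, raw_low * GAIN_RATIO).astype(np.uint16)
```

So in the unsaturated regime nothing is lost at all, and in the saturated regime the step size of 16 LSB is what the low-gain channel delivers anyway.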
I think this is called diversity combining in signal processing: https://en.wikipedia.org/wiki/Diversity_combining
One should find a lot of papers on this topic and maybe something useful for us.
I think we were once asked to think about data reduction as well. Maybe we can collect ideas here, or generally discuss a bit what data reduction strategies are around.