cta-observatory / ctapipe

Low-level data processing pipeline software for CTAO or similar arrays of Imaging Atmospheric Cherenkov Telescopes
https://ctapipe.readthedocs.org
BSD 3-Clause "New" or "Revised" License

Implement extraction/cleaning method used in MARS for SumTrigger / EventDisplay analysis #1852

Open maxnoe opened 2 years ago

maxnoe commented 2 years ago

Please describe the use case that requires this feature.

The combined cleaning and extraction method is described in these papers:

and in this PhD thesis:

https://mediatum.ub.tum.de/doc/1617483/1617483.pdf (section 4.2.1)

It is used by the EventDisplay analysis; code here: https://github.com/Eventdisplay/Eventdisplay/blob/31f162f5b24c6cec4f5de7cd88a8c2aff82e19fc/src/VImageCleaning.cpp

It is also used by MAGIC pulsar analyses, where it is known as the MaTaJu cleaning (Ju for @jsitarek), as described in the above PhD thesis.

I have to say: the description in the initial paper is not very clear. I will collect questions about the approach here in the coming days.

GernotMaier commented 2 years ago

Let me point out one huge advantage of Maxim's optimized next-neighbour cleaning: no cut optimisation is necessary, as cuts are chosen automatically for a fixed "fake image probability". This works very well for all camera types and almost removes the differences between the camera types in the analysis.
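The core idea (thresholds derived from a fixed fake-image probability rather than hand-tuned) can be sketched in a back-of-the-envelope way. This is only an illustration, not the actual algorithm from the papers: it assumes independent Gaussian pedestal noise and splits the camera-wide fake budget evenly over all next-neighbour groups; the function name and parameters are made up for this sketch.

```python
from math import sqrt
from statistics import NormalDist

def nn_group_threshold(noise_sigma, n_group, n_groups, p_fake):
    """Illustrative charge threshold for a group of n_group neighbouring
    pixels such that the camera-wide probability of any noise-only group
    fluctuating above it is roughly p_fake.

    Assumes independent Gaussian pedestal noise of width noise_sigma per
    pixel, so the summed charge of a group has width
    noise_sigma * sqrt(n_group).
    """
    # Bonferroni-style split of the camera-wide budget over all groups
    p_per_group = p_fake / n_groups
    sigma_sum = noise_sigma * sqrt(n_group)
    # inverse survival function of the standard normal
    return sigma_sum * NormalDist().inv_cdf(1.0 - p_per_group)
```

For example, `nn_group_threshold(1.0, 2, 2000, 0.01)` would give the 2NN sum threshold (in noise-sigma units) for a camera with ~2000 pixel pairs at a 1% fake-image probability; a smaller `p_fake` yields a tighter (higher) threshold.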

From ongoing tests in VERITAS, we see an improved low-energy response with the optimized next-neighbour cleaning method.

I think @kpfrang worked at some point on a possible implementation in ctapipe; maybe he can comment on it.

kpfrang commented 2 years ago

Yes, 3 years ago I was working on the optimized next-neighbour cleaning:

https://github.com/kpfrang/ctapipe/blob/003a5d1f534ff4082e47d0f284b639b7b24a95ce/ctapipe/image/time_next_neighbor_cleaning.py

If I remember correctly, it still had a bug: the actual fake-image probability for the simulated background images was significantly higher than the value I had fixed via the parameter. I never had the time to understand why.

kosack commented 2 years ago

> Let me point out one huge advantage of Maxim's optimized next-neighbour cleaning: no cut optimisation is necessary, as cuts are chosen automatically for a fixed "fake image probability". This works very well for all camera types and almost removes the differences between the camera types in the analysis.

We could simply make a cut-optimization script that does the same for standard picture/boundary cleaning (or, perhaps more generally, picture/boundary thresholds defined in pedestal-sigma units), right? That is independent of the method.
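Such a script could look roughly like the following toy sketch. Everything here is illustrative: it uses a simplified 1D chain of pixels instead of a real camera geometry, Gaussian pedestal noise, and made-up function names; it is not ctapipe's tailcuts implementation. The idea is just to scan picture/boundary thresholds (in pedestal-sigma units) over noise-only images and pick the loosest cuts whose fake-image probability stays below a target.

```python
import numpy as np

def tailcuts_keep(charges, picture_thresh, boundary_thresh):
    """Toy picture/boundary cleaning on a 1D chain of pixels
    (pixel i neighbours i-1 and i+1)."""
    picture = charges >= picture_thresh
    # a boundary pixel must exceed its threshold AND touch a picture pixel
    has_picture_neighbour = np.zeros_like(picture)
    has_picture_neighbour[1:] |= picture[:-1]
    has_picture_neighbour[:-1] |= picture[1:]
    boundary = (charges >= boundary_thresh) & has_picture_neighbour
    return picture | boundary

def optimize_thresholds(noise_images, ped_sigma, target_fake_prob,
                        scan=np.arange(2.0, 8.0, 0.25)):
    """Loosest picture threshold (in pedestal-sigma units, with
    boundary = picture / 2) whose fake-image probability on
    noise-only images stays below the target."""
    for s in scan:
        pic, bound = s * ped_sigma, s * ped_sigma / 2
        # call an image "fake" if cleaning keeps at least 2 pixels
        fakes = sum(tailcuts_keep(img, pic, bound).sum() >= 2
                    for img in noise_images)
        if fakes / len(noise_images) <= target_fake_prob:
            return s
    return scan[-1]

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(1000, 500))  # 1000 pedestal-only images
best = optimize_thresholds(noise, ped_sigma=1.0, target_fake_prob=0.01)
```

Because the kept-pixel set shrinks monotonically as the thresholds rise, a stricter fake-image target always yields an equal or tighter threshold, so the scan can simply stop at the first passing value.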

maxnoe commented 2 years ago

Which is basically what we already have in the DL1 benchmark notebooks.

kosack commented 2 years ago

Right now we optimize cuts using signal/noise criteria for single pixels, but false-positive images would also be possible as a criterion for any cleaning method (in fact it would be better, if a bit more involved and requiring a bit more input data than what we do now).

We had also tried in the past using mono-reconstruction resolution as a cut-optimization criterion: minimize the difference between the Hillas axis after cleaning and the true point of origin. In the end there are many ways to do it, and I don't think it should be tied to the cleaning method. Of course, fewer parameters is always nice.
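The mono-resolution criterion mentioned above can be sketched with plain second moments: fit the principal (Hillas-style) axis of the cleaned image and use the perpendicular "miss" distance of the true source position from that axis as the figure of merit to minimize. This is a minimal numpy illustration, not ctapipe's actual Hillas parametrization; the function names are made up.

```python
import numpy as np

def hillas_axis(x, y, w):
    """Centre of gravity and major-axis unit vector of a weighted
    pixel image, from the second moments (Hillas-style)."""
    w = np.asarray(w, dtype=float)
    cog = np.array([np.average(x, weights=w), np.average(y, weights=w)])
    dx, dy = x - cog[0], y - cog[1]
    cov = np.array([
        [np.average(dx * dx, weights=w), np.average(dx * dy, weights=w)],
        [np.average(dx * dy, weights=w), np.average(dy * dy, weights=w)],
    ])
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigenvector with the largest eigenvalue is the major axis
    return cog, eigvecs[:, np.argmax(eigvals)]

def miss(cog, axis, source):
    """Perpendicular distance of the true source position from the
    reconstructed image axis (2D cross product with the unit axis)."""
    rel = np.asarray(source, dtype=float) - cog
    return abs(rel[0] * axis[1] - rel[1] * axis[0])
```

Averaging `miss` over many events (per cleaning-parameter choice) gives a single number to minimize, independent of which cleaning method produced the images.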

maxnoe commented 1 year ago

There was a bachelor thesis in Dortmund picking up the work from @kpfrang by @lucaMF

PR is here but needs updates: https://github.com/cta-observatory/ctapipe/pull/1857/