We have yet to give much thought to how the injection simulations will be integrated into our pipeline.
They measure the pipeline's recovery efficiency for transients of different brightnesses, and are essential for setting limits or rates based on (non-)detections.
Here are a few things to think about:
Where exactly (at what stage) do we insert the simulated sources? Into the new image? Into the subtraction?
What do we use to build the injections? We will likely want the image's PSF and zero point (ZP), but perhaps other quantities as well?
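As a starting point, injecting a point source needs only the PSF and the zero point: the ZP converts a target magnitude into counts, and the PSF spreads those counts over pixels. A minimal sketch (function and argument names are illustrative, not pipeline API):

```python
import numpy as np

def inject_point_source(image, psf_stamp, x, y, mag, zeropoint):
    """Add a fake point source of the given magnitude to `image` in place.

    `psf_stamp` is a small PSF cutout normalized to sum to 1;
    `zeropoint` is the image's photometric zero point, so that
    flux = 10**(-0.4 * (mag - zeropoint)) in the image's count units.
    Returns the injected flux.
    """
    flux = 10.0 ** (-0.4 * (mag - zeropoint))
    half = psf_stamp.shape[0] // 2
    y0, x0 = int(round(y)) - half, int(round(x)) - half
    # Clip the stamp to the image bounds before adding.
    ys = slice(max(y0, 0), min(y0 + psf_stamp.shape[0], image.shape[0]))
    xs = slice(max(x0, 0), min(x0 + psf_stamp.shape[1], image.shape[1]))
    sy = slice(ys.start - y0, ys.stop - y0)
    sx = slice(xs.start - x0, xs.stop - x0)
    image[ys, xs] += flux * psf_stamp[sy, sx]
    return flux
```

A real implementation would also want sub-pixel centering (shifting the PSF model before stamping) and per-source Poisson noise, which this sketch omits.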
Do we plan to inject sources at random positions on the image? Should we avoid bright stars, or place all sources close to galaxies?
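Whichever policy we choose, a simple way to implement "random but not on bad pixels" is rejection sampling against a mask of regions to avoid (e.g., bright-star halos). A sketch, assuming a boolean mask is available (names are hypothetical):

```python
import numpy as np

def draw_positions(shape, bad_mask, n, rng=None, margin=10):
    """Draw n random (x, y) injection positions, rejecting any that land
    on masked pixels. `bad_mask` is a boolean array with the same shape
    as the image, True where injections should be avoided; `margin`
    keeps sources away from the image edges.
    """
    if rng is None:
        rng = np.random.default_rng()
    positions = []
    while len(positions) < n:
        x = rng.integers(margin, shape[1] - margin)
        y = rng.integers(margin, shape[0] - margin)
        if not bad_mask[y, x]:
            positions.append((x, y))
    return positions
```

The galaxy-targeted variant would invert the logic: sample only from pixels flagged as near a galaxy (e.g., drawn from a catalog-based mask) instead of rejecting bad ones.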
Is there a plan to inject sources into the same sky position over different images, to build up fake objects?
How do we store the simulation parameters alongside the data products, and ensure we don't confuse simulated detections with real ones?
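One lightweight option is to persist a "truth record" per injected source next to the data product it went into, with an explicit simulated flag. A minimal sketch of such a record (field names are illustrative, not a proposed schema):

```python
import dataclasses
import json

@dataclasses.dataclass
class InjectionRecord:
    """Truth parameters for one injected source, stored alongside the
    image or subtraction it was injected into. The `is_simulated` flag
    travels with the record so downstream code can never mistake it
    for a real detection.
    """
    image_id: str
    x: float
    y: float
    mag: float
    is_simulated: bool = True

    def to_json(self) -> str:
        return json.dumps(dataclasses.asdict(self))
```

If the parameters also feed into provenance (see below, this is an open question), the serialized record, or a hash of it, could be attached to the data product's provenance entry.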
Do we intend to blind the human "scanners" to which events are simulated and which are real?
Will the simulation parameters affect the provenance of the data products?
How many injections do we need to get good statistics across the parameter space of injected sources (brightness, time scale, etc.)?
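The statistics question reduces to binomial counting per bin of parameter space: with N injections in a magnitude bin and k recovered, the efficiency is k/N with uncertainty roughly sqrt(p(1-p)/N). A sketch of the bookkeeping (a hypothetical helper, not pipeline code):

```python
import numpy as np

def efficiency_curve(inj_mags, recovered, bins):
    """Recovery efficiency per magnitude bin with a simple binomial
    error estimate. `recovered` is a boolean array parallel to
    `inj_mags`; `bins` are bin edges. Bins with no injections get NaN.
    """
    inj_mags = np.asarray(inj_mags)
    recovered = np.asarray(recovered, dtype=bool)
    eff, err = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (inj_mags >= lo) & (inj_mags < hi)
        n = sel.sum()
        k = recovered[sel].sum()
        p = k / n if n else np.nan
        eff.append(p)
        err.append(np.sqrt(p * (1 - p) / n) if n else np.nan)
    return np.array(eff), np.array(err)
```

For a few-percent uncertainty per bin this implies hundreds of injections per bin, so the total count scales multiplicatively with the number of bins in each parameter (brightness, time scale, position class, etc.).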
Is there an efficient way to inject many simulations into a single image and detect all of them (e.g., by injecting into the subtraction image, by simply making many injections on one image, or by doing a separate subtraction just for the simulated sources and adding it to the image afterward)? Do we even need something like that?