LSSTDESC / Twinkles

10 years. 6 filters. 1 tiny patch of sky. Thousands of time-variable cosmological distance probes.

Calibration systematics studies in Twinkles? #415

Open rbiswas4 opened 7 years ago

rbiswas4 commented 7 years ago

At the SN meeting (which everyone should have stayed back for :) ), a discussion started about how we could study the impact of photometric calibration on supernova cosmology through Twinkles. This is one of the biggest problems in SN cosmology, and most SN cosmologists would call it the largest source of 'systematic error' in current analyses, as those analyses themselves demonstrate.

The origin of the problem is the reference catalog used for calibration. In real life, this is a catalog of astrophysical point sources of specific classes (specific white dwarfs) with extremely good spectrophotometric measurements, where the class of the source matters (people like some very specific kinds of sources, and maybe @wmwv can give us more background on why one can't just measure the spectra to death rather than mixing measurements with theoretical understanding). Currently, I believe we are putting all of the stars in the simulation into the truth catalog, and so we are not seeing this effect.

The discussion (with @wmwv, @djreiss) ended with a suggestion for our current Twinkles Run 3: we could emulate this by re-running the relevant DM pipeline steps after sub-selecting the reference catalog down to a smaller number of stars, while trying to keep the magnitude distribution expected for calibration stars. In future runs we could actually put in some biases if we want to study this in greater detail.
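To make the idea concrete, here is a rough numpy/astropy sketch of the kind of sub-selection meant above (not Twinkles pipeline code; the column name, magnitude range, and target numbers are just placeholders):

```python
import numpy as np
from astropy.table import Table

def subselect_reference_stars(ref_cat, mag_col="r", mag_range=(17.0, 20.0),
                              n_keep=200, n_bins=6, seed=42):
    """Thin a reference catalog to a calibration-star-like sample, drawing
    roughly equal numbers of stars per magnitude bin so the magnitude
    distribution of the kept stars stays under control."""
    rng = np.random.default_rng(seed)
    mags = np.asarray(ref_cat[mag_col])
    idx = np.flatnonzero((mags >= mag_range[0]) & (mags < mag_range[1]))

    # Bin the in-range stars by magnitude and sample within each bin.
    edges = np.linspace(mag_range[0], mag_range[1], n_bins + 1)
    which_bin = np.digitize(mags[idx], edges) - 1
    per_bin = max(1, n_keep // n_bins)

    keep = []
    for b in range(n_bins):
        members = idx[which_bin == b]
        if len(members) > 0:
            keep.append(rng.choice(members, size=min(per_bin, len(members)),
                                   replace=False))
    return ref_cat[np.sort(np.concatenate(keep))]

# e.g. small_ref = subselect_reference_stars(Table.read("ref_cat.fits"), mag_col="r")
```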

wmwv commented 7 years ago

Such explorations can come after pixel-level analyses and so could be carried out without significant computational effort.

We should think about bookkeeping. Calibration information is associated with the calexp dataset in the Butler. We don't really want full copies of the processed images for each of the different reference sets; we just need to do catalog-level operations.
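For instance, a catalog-level recalibration could be as lightweight as the following toy sketch (the flux column name and file layout are placeholders, not Twinkles conventions), which writes a recalibrated copy of a source catalog and leaves the images alone:

```python
from astropy.table import Table

def recalibrate_catalog(src_path, out_path, delta_zp_mag):
    """Apply a zeropoint shift of `delta_zp_mag` magnitudes to a measured
    source catalog and write the result alongside the original."""
    src = Table.read(src_path)
    scale = 10.0 ** (-0.4 * delta_zp_mag)        # flux scaling for the zeropoint shift
    src["base_PsfFlux_flux"] = src["base_PsfFlux_flux"] * scale
    src.meta["DELTA_ZP"] = delta_zp_mag          # record the shift for bookkeeping
    src.write(out_path, overwrite=True)

# e.g. recalibrate_catalog("src_visit123_r.fits", "src_visit123_r_recal.fits", 0.01)
```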

drphilmarshall commented 7 years ago

I'm OK with us presenting results from the analysis of a somewhat ideally calibrated dataset, but it would be good to have a sense of just how idealistic it is. If our main results are going to be about the "error model" of the Twinkles data, which we can use to generate catalog-level simulations at much larger scale, then we want that error model to be as plausible as possible, so that we don't end up over- or under-estimating our final cosmological precision and accuracy too much. Is our current calibration procedure too optimistic to be useful?

wmwv commented 7 years ago

Calibration uncertainties tend to be different from most other uncertainties because they are essentially all about the covariance. But that makes them difficult to simulate in a single run, because in any one realization you will just get some fixed offset. All SN Ia cosmological analyses in the past 10 years have done a detailed simulation of the calibration uncertainties and the resulting effect on measurements of w.

I suggest we take the target goals from the SRD and randomly sample within them. The question then becomes: how should we sample? I suggest the following. [By "filter" below I mean the system transmission function associated with the given filter.]

  1. The reference catalog should have associated errors and be randomly resampled within those errors. We may already do this.
  2. Simulate a color-dependent calibration error, such that the $\Delta g$ error is a function of $r-i$ color. We could do this by having an input reference calibration catalog where a series of different $\Delta g$ shifts is introduced as a function of $r-i$ color.
  3. Simulate relative filter-to-filter errors ("absolute color" errors): e.g., there is some unknown offset between g and r in the calibration to AB. There is a 6×6 matrix that describes the covariance between the calibrations of the 6 filters. This could be implemented as a post-catalog manipulation of just the 6 absolute calibration numbers of the filters. (A toy numpy version of all three options is sketched just after this list.)
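As promised above, here is a back-of-the-envelope numpy sketch of the three kinds of perturbation. Everything in it (the size of the errors, the form of the color term, the covariance matrix) is an illustrative assumption, not a proposal for the actual numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
bands = ["u", "g", "r", "i", "z", "y"]

def resample_within_errors(mags, mag_errs):
    """(1) Re-draw each reference-catalog magnitude within its quoted uncertainty."""
    return mags + rng.normal(0.0, mag_errs)

def color_dependent_shift(g_mags, r_mags, i_mags, slope=0.005):
    """(2) Apply a Delta g that varies linearly with r - i color."""
    return g_mags + slope * (r_mags - i_mags)

def draw_filter_offsets(cov):
    """(3) Draw one correlated set of per-filter AB offsets from a 6x6 covariance."""
    return dict(zip(bands, rng.multivariate_normal(np.zeros(len(bands)), cov)))

# Example: 5 mmag of uncorrelated per-band calibration scatter.
offsets = draw_filter_offsets(np.diag(np.full(6, 0.005 ** 2)))
```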

Doing 1 or 2 would involve making sure that it is easy to update the calexp_md fluxmag0 for an image based on its already-processed catalog. Or, even better, adding an additional simulation wrapper level that preserves the re-entrancy of the data on disk.
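Roughly what I have in mind for that wrapper, as a hedged sketch: it assumes the Gen2 lsst.daf.persistence Butler and that the calexp metadata carries a FLUXMAG0 keyword, and it stores perturbed zeropoints in a sidecar table instead of rewriting anything on disk, so the repo stays re-entrant. The dataId fields and file names are placeholders:

```python
from astropy.table import Table
from lsst.daf.persistence import Butler

def build_zeropoint_overrides(repo, data_ids, delta_zp_mag, out_path="zp_overrides.fits"):
    """Read the existing zeropoints, perturb them by `delta_zp_mag` magnitudes,
    and write the overrides to a sidecar table without touching the processed data."""
    butler = Butler(repo)
    rows = []
    for data_id in data_ids:
        md = butler.get("calexp_md", dataId=data_id)
        fluxmag0 = md.get("FLUXMAG0")                      # flux of a zero-magnitude source
        # A zeropoint shift of delta_zp_mag corresponds to scaling fluxmag0.
        new_fluxmag0 = fluxmag0 * 10.0 ** (0.4 * delta_zp_mag)
        rows.append((data_id["visit"], data_id["filter"], fluxmag0, new_fluxmag0))
    Table(rows=rows, names=("visit", "filter", "fluxmag0", "fluxmag0_new")).write(
        out_path, overwrite=True)
```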

I think we should not simulate calibration errors as a function of airmass or atmospheric conditions.

It's unfortunately somewhat beyond the scope of the Twinkles team to generate an independent, reasonable estimate of the final calibration uncertainties for LSST. Calibration is hard. SDSS was ground-breaking in doing ~2%. DES sounds like it's doing well, and that makes me optimistic that LSST can as well.