LSSTDESC / SprintDaySLAC2023

Repository to host Sprints at the SLAC July 2023 Collaboration Meeting

[SPRINT] metrics for survey uniformity and photometric calibration #18

Open humnaawan opened 1 year ago

humnaawan commented 1 year ago

metrics for survey uniformity and photometric calibration

We'd like to code up metrics for survey uniformity and photometric calibration to allow evaluation of observing strategy at key time intervals.

Contacts: @beckermr, @erykoff
Time: TBA, depending on who's interested
Main communication channel: #desc-wfd-uniformity-and-rolling
In-person/Virtual/Hybrid: hybrid
Zoom room (if applicable): TBA

Goals and deliverables

Reach goals

Stretch goals

Resources and skills needed

Interest in the topic and/or metric development (in MAF, the Metrics Analysis Framework); coding (Python)

Detailed description

There's been quite a bit of ongoing work to understand (quantitatively) the impacts of observing strategy choices on survey uniformity and photometric calibration, and there are some concrete ideas regarding how to evaluate these (at least to first order). See the discussion notes doc for further details.

ixkael commented 1 year ago

Adding the notes from the non-uniformity session: https://docs.google.com/document/d/1E9mtGsLvoj1ksXER1Ot8z5gwS0ryywfwqSIppbJ870A/edit (see the very last point about 3x2pt forecasts)

beckermr commented 1 year ago

Martin White and co have a paper on how to propagate n(z) variations into the analysis. This might be interesting to use for forecasts and could make a nice sprint.

aimalz commented 1 year ago

I should have left a comment on this when assigning myself, but @humnaawan suggested a sprint in that session to explore the feasibility of an emulator of photo-z metrics from OS parameters over the sky, previously investigated by @MelissaGraham but possibly worth revisiting now that @hangqianjun has developed some perhaps game-changing RAIL tooling. I may also attempt to resume work to incorporate TheLastMetric into a MAF metric, originally started with @patricialarsen at the last remote Sprint Day (sorry for dropping this!), now that @bscot has streamlined the code.

rmandelb commented 1 year ago

To clarify one thing about goals regarding potential metrics: we already have a multi-band exposure time metric, and the original proposed goal was to turn it into a multi-band depth metric, factoring in the other effects besides exposure time that determine the depth.
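
For orientation, one common way to fold in the other effects besides exposure time is the single-visit depth scaling from the LSST overview paper (Ivezić et al. 2019, eq. 6). The sketch below is illustrative only, not the metric this sprint has agreed on; the band-dependent constants `c_m` and `k_m` are placeholders to be filled in per filter:

```python
import numpy as np

def m5_single_visit(c_m, m_sky, theta_eff, t_vis, k_m, airmass):
    """Single-visit 5-sigma depth, following eq. 6 of Ivezic et al. (2019).

    c_m       : band-dependent system constant (placeholder)
    m_sky     : sky brightness [mag/arcsec^2]
    theta_eff : effective seeing FWHM [arcsec]
    t_vis     : visit exposure time [s]
    k_m       : band-dependent atmospheric extinction coefficient (placeholder)
    airmass   : airmass of the visit
    """
    return (c_m
            + 0.50 * (m_sky - 21.0)
            + 2.5 * np.log10(0.7 / theta_eff)
            + 1.25 * np.log10(t_vis / 30.0)
            - k_m * (airmass - 1.0))
```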

humnaawan commented 1 year ago

we had a meeting 10:30-11:30 PT earlier today. here's a summary:

there was also some talk about how to incorporate the PSF area into the uniformity metric. it's not clear to me if that's something that needs to be handled on top of the 5-sigma single-visit depth that comes with the sims outputs. The output schema says that the fiveSigmaDepth column is

"The magnitude of an isolated point source detected at the 5-sigma level"

presumably accounting for things like seeingFwhmEff (although maybe what was intended was that we'll use this to do the inverse weighting when coadding?).

it was decided that we do not have to look at this multi-band depth metric with extinction in order to isolate the impacts of rolling, although @beckermr pointed out that seeing etc. is correlated with extinction, so this would/should show up in our findings.

we also talked about how these metrics need to be run on no-rolling vs rolling sims (for v3, we have the no-roll analog of the rolling baseline).

i don't think anybody got to coding, but we really need to figure out who is writing these metrics and when.
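
as a concrete starting point for whoever picks this up, here is a minimal sketch of a coadded-depth metric. it assumes the rubin_sim MAF API (argument and method casing differs between rubin_sim versions), the class name is made up for illustration, and rubin_sim already ships a coadded-m5 metric, so this is mainly a template for layering in uniformity-specific logic:

```python
import numpy as np
from rubin_sim.maf.metrics import BaseMetric

class UniformityDepthMetric(BaseMetric):
    """Coadded 5-sigma depth at each sky point, from single-visit depths."""

    def __init__(self, m5_col="fiveSigmaDepth", **kwargs):
        self.m5_col = m5_col
        super().__init__(col=[m5_col], units="mag", **kwargs)

    def run(self, data_slice, slice_point=None):
        # Limiting fluxes combine in inverse quadrature across visits,
        # which in magnitudes is the standard coadded-m5 formula.
        return 1.25 * np.log10(np.sum(10.0 ** (0.8 * data_slice[self.m5_col])))
```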

ixkael commented 1 year ago

What about figuring out a function of the depth in the 5 bands from DC2?

egawiser commented 1 year ago

To co-add depth across 5 bands, we need to first figure out what spectral index we'd like to assume for a typical source. If one assumes a source that is flat in f_nu (i.e., constant in AB magnitudes), then all of the different filter visits can be treated as if they were in a single filter, and coaddm5 works for the combined filter the same way it works for a single filter. However, only the bluest sources are roughly flat in f_nu; a more typical source is flat in f_lambda, which means it gets brighter like lambda^2 in f_nu. So a more realistic approach would scale everything to i-band using a factor of (lambda_filter/lambda_iband)^2 turned into magnitudes, where those lambdas are central wavelengths for each filter.
Visits in red bands will be effectively deeper than their nominal AB mag depth, and vice versa for visits in blue bands.
Individual visits would then get co-added with the offsets included, using 1.25 * log10(sum_i(10.0 ** (0.8 * [m_i + factor_i]))), where factor_i = 2.5 * log10((lambda_filter_i / lambda_iband)^2).
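
A short Python sketch of this calculation (the effective wavelengths below are illustrative values, not authoritative filter parameters):

```python
import numpy as np

# approximate effective central wavelengths in nm (illustrative only)
LAMBDA_EFF = {"u": 368.0, "g": 480.0, "r": 622.0, "i": 754.0, "z": 869.0, "y": 971.0}

def coadd_m5_flat_flambda(m5_visits, bands, ref="i"):
    """Coadd single-visit 5-sigma depths across filters, assuming a source
    flat in f_lambda (so f_nu ~ lambda^2), with depths referenced to ref band."""
    m5_visits = np.asarray(m5_visits, dtype=float)
    factor = np.array([2.5 * np.log10((LAMBDA_EFF[b] / LAMBDA_EFF[ref]) ** 2)
                       for b in bands])
    return 1.25 * np.log10(np.sum(10.0 ** (0.8 * (m5_visits + factor))))

# example: one r-band and one z-band visit
print(coadd_m5_flat_flambda([24.4, 23.3], ["r", "z"]))
```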

beckermr commented 1 year ago

Good point on the spectra. We had planned to ignore that, but it sounds like we should not. For those puzzling over this formula, what is going on is that the flux of an object at the depth limit is proportional to the square root of the sky background. Thus the fluxes are summed in quadrature and then a square root is taken.
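
A quick numerical sanity check of this reasoning (the depths are hypothetical): coadding two equal-depth visits should gain 2.5 * log10(sqrt(2)) ≈ 0.38 mag, as expected when two equal-noise images are combined:

```python
import numpy as np

m5 = np.array([24.5, 24.5])  # two equal-depth visits (hypothetical values)
coadd = 1.25 * np.log10(np.sum(10.0 ** (0.8 * m5)))
print(coadd - m5[0])  # ~0.376 mag, i.e. 2.5 * log10(sqrt(2))
```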