jchiang87 opened this issue 4 years ago
In the #desc-dc2-validation channel, @rmjarvis commented that we should still try to avoid stars within a magnitude or so of saturation because of brighter-fatter contamination, presumably even after the B/F correction is applied. The flux at which saturation starts will be a function of the PSF and will set in at lower fluxes for visits with better seeing. Here's a plot of maximum pixel value vs flux for source footprints in raw data from visits in each of the six bands of Run2.2i y01 that have the best seeing: (Note that the y-band points are shifted to the right wrt the other bands owing to our out-of-focus simulations in y.)
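For reference, the max-pixel-vs-flux quantity in the plot could be computed roughly as below. This is only a sketch, not the script used for the plot: it assumes an afw Exposure and its matching SourceCatalog, uses the footprint bounding box as a stand-in for the exact footprint spans, and the flux column name is an assumption (the plot's flux axis may use a different measure).

```python
import numpy as np

def max_pixel_vs_flux(src, exposure):
    """Return (max pixel value in footprint bbox, PSF instFlux) per source."""
    image = exposure.image.array
    xy0 = exposure.getXY0()            # array origin in detector coordinates
    x0, y0 = xy0.getX(), xy0.getY()
    out = []
    for record in src:
        bbox = record.getFootprint().getBBox()
        # Clip the footprint bounding box out of the image array.
        stamp = image[bbox.getMinY() - y0: bbox.getMaxY() + 1 - y0,
                      bbox.getMinX() - x0: bbox.getMaxX() + 1 - x0]
        out.append((float(np.max(stamp)),
                    float(record["base_PsfFlux_instFlux"])))
    return np.array(out)
```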
The plateau at ~1.4e5 ADU corresponds to the saturation value for our sims (100k e- and a gain of 0.7 e-/ADU). For the visits with the best seeing, saturation starts at a flux of ~1.5e6 ADU (dotted line), so if we want to back off by 1 magnitude (a factor of ~2.5 in flux), that would correspond to a flux limit of ~6e5 ADU, or a S/N of ~770, assuming the flux errors are Poisson.
To check the range of limiting flux values, I simulated grids of stars with various fluxes for nominal realized seeings of 0.50 arcsec and 1.20 arcsec. For the worse seeing case, saturation starts at 7.4e6 ADU (dashed line), corresponding to a flux limit of ~3e6 ADU or a S/N limit of ~1700.
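As a sanity check on the arithmetic, the limits quoted above follow from backing off 1 magnitude (a factor of 10**0.4 ≈ 2.5 in flux) from the saturation onset and treating the flux in ADU as Poisson-distributed; this minimal sketch reproduces the ~770 and ~1700 figures:

```python
import numpy as np

def flux_limit_and_snr(sat_onset_adu, backoff_mag=1.0):
    """Flux limit `backoff_mag` below the onset of saturation, and the
    corresponding S/N assuming Poisson-limited flux errors (in ADU).
    Note: Poisson statistics strictly apply to electrons; working in ADU
    here simply reproduces the numbers quoted above."""
    flux_limit = sat_onset_adu / 10**(0.4 * backoff_mag)
    return flux_limit, np.sqrt(flux_limit)

print(flux_limit_and_snr(1.5e6))  # best seeing:  (~6.0e5 ADU, S/N ~770)
print(flux_limit_and_snr(7.4e6))  # worse seeing: (~2.9e6 ADU, S/N ~1700)
```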
Given the range of limiting values, I'm not advocating that we set an upper limit in the configs on either the flux or S/N for the PSF star selector, but it seemed useful to post these results in case an algorithmic approach is pursued.
this may be a useful thing for the Princeton algorithm workshop.
There's a SUSPECT bit which is set for all pixels above a configurable threshold, and which is propagated to the sources. It was introduced to support SuprimeCam, where our Japanese colleagues didn't trust bright stars -- I've always suspected that they didn't apply linearity corrections. Anyway, we could set this threshold a factor of, say, 2 below saturation and then ignore stars with the SUSPECT bit set.
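For illustration, a cut like that could look roughly like the following. This is only a sketch: the column names (calib_psf_candidate and the base_PixelFlags_* flags) are the standard meas_base ones and are assumed here rather than taken from the DC2 configs.

```python
import numpy as np

def good_psf_candidates(src):
    """Boolean mask selecting PSF candidates whose footprints contain no
    saturated or suspect pixels, given a catalog with standard pixel flags."""
    candidate = src["calib_psf_candidate"]
    bad = (src["base_PixelFlags_flag_saturated"]
           | src["base_PixelFlags_flag_saturatedCenter"]
           | src["base_PixelFlags_flag_suspect"]
           | src["base_PixelFlags_flag_suspectCenter"])
    return np.logical_and(candidate, ~bad)
```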
Have you looked at the flags in your objects? All the saturated objects should be labelled already.
Using the following processCcd.py configs results in a ~8 mmag offset between psf_mag values (computed from base_PsfFlux) and calib_mag values (computed from slot_CalibFlux, which is base_CircularApertureFlux_12_0 for our current configs) in at least the grizy bands. The offsets can be clearly seen in the IN2P3 processing of the Run2.1.1i data using weekly w_2019_19: (Note that the 8 mmag offset was originally seen in coadd data comparing to reference catalog stars in i-band. See the various entries in Slack following this posting by Stephane Plaszczynski.)
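For anyone wanting to reproduce the comparison, a minimal sketch of the per-source offset is below. The column names (calib_psf_candidate, base_PsfFlux_instFlux, base_CircularApertureFlux_12_0_instFlux) are the standard DM ones and are assumptions here; since the offset is a flux ratio, the photometric zero point cancels and no calibration is needed.

```python
import numpy as np

def psf_minus_calib_mmag(src):
    """psf_mag - calib_mag in mmag for the PSF-candidate stars in a
    per-visit source catalog with standard column names."""
    stars = src["calib_psf_candidate"]
    psf_flux = src["base_PsfFlux_instFlux"][stars]
    calib_flux = src["base_CircularApertureFlux_12_0_instFlux"][stars]
    # The zero point cancels in the magnitude difference.
    return -2.5e3 * np.log10(psf_flux / calib_flux)

# e.g. print(np.median(psf_minus_calib_mmag(src)))
```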
An alternative processing at NCSA using w_2019_34 does not show the 8 mmag offsets: Comparing the stars marked as calib_psf_candidate between the two processings, it's clear that the signalToNoiseMax=200.0 setting is removing the bright stars from the selection: The config at issue was set in obs_lsst here.
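To make the setting concrete, it is applied through the objectSize star selector in the measurePsf subtask; a processCcd.py config override relaxing it might look roughly like this. The config path and the "value <= 0 disables the cut" convention are assumptions based on DM-16785 and the obs_lsst defaults, not copied from the proposed configs discussed below, so check them against the actual obs_lsst config before use.

```python
# processCcd.py config override (sketch; path and values are illustrative).
starSel = config.charImage.measurePsf.starSelector["objectSize"]
starSel.signalToNoiseMin = 50.0   # faint-end cut for PSF candidates
starSel.signalToNoiseMax = 0.0    # assumed: <= 0 disables the bright-end cut (was 200.0)
```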
However, in DM-16785, the signalToNoiseMax=200.0 setting is not really being recommended, especially if we are applying the brighter-fatter correction, which we are. In that issue, the original problem was most prevalent in data with rawSeeing <~ 0.3 arcseconds. So, as a test of the following proposed configs, I've analyzed the visits in each band from Run2.2i y01 that have the best seeing (i.e., rawSeeing <~ 0.3 arcsec), using these settings. (As discussed in Slack, any S/N cut < 50 wouldn't have any effect because of prior selections at the initial detection stage.) Here is the outcome: These are definitely better than what we obtain with the current values, though there are still noticeable offsets. The underlying cause of the issue appears to be connected with how the aperture corrections are computed and/or applied. This is being discussed in the #dm-science-pipelines channel.