esheldon opened this issue 7 years ago
We still need to figure out the flags, but I'm changing the strategy: rather than nulling the weight image for certain flags up front, do it at read time. That way a mistake now does not propagate into all later analyses.
I think this is now the main blocker for doing a meaningful run with any of the codes; we need to know which bits should be considered bad.
@TallJimbo here is the flag map, I think. Currently I am setting the weight map to zero (i.e. not using those pixels) for every flag except the two indicated. Do you agree with this approach?
```python
HSC_BADPIX_MAP = {
    'BAD': 1,
    'SAT': 2,
    'INTRP': 4,
    'CR': 8,
    'EDGE': 16,
    'DETECTED': 32,           # should not mask
    'DETECTED_NEGATIVE': 64,  # should not mask
    'SUSPECT': 128,
    'NO_DATA': 256,
    'BRIGHT_OBJECT': 512,
    'CLIPPED': 1024,
    'CROSSTALK': 2048,
    'NOT_DEBLENDED': 4096,
    'UNMASKEDNAN': 8192,
}
```
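A minimal sketch of the read-time strategy described above: zero the weight wherever the mask plane has any "bad" bit set, skipping `DETECTED` and `DETECTED_NEGATIVE`. The function name `null_weight_for_flags` is hypothetical, not from the actual codebase.

```python
import numpy as np

# Flag map from above; DETECTED and DETECTED_NEGATIVE should not mask.
HSC_BADPIX_MAP = {
    'BAD': 1, 'SAT': 2, 'INTRP': 4, 'CR': 8, 'EDGE': 16,
    'DETECTED': 32, 'DETECTED_NEGATIVE': 64, 'SUSPECT': 128,
    'NO_DATA': 256, 'BRIGHT_OBJECT': 512, 'CLIPPED': 1024,
    'CROSSTALK': 2048, 'NOT_DEBLENDED': 4096, 'UNMASKEDNAN': 8192,
}

DO_NOT_MASK = {'DETECTED', 'DETECTED_NEGATIVE'}

def null_weight_for_flags(weight, mask, badpix_map=HSC_BADPIX_MAP,
                          skip=DO_NOT_MASK):
    """Return a copy of the weight map with zeros wherever the mask
    plane has any bad bit set (applied at read time, so the original
    weight image on disk is untouched)."""
    bad_bits = 0
    for name, bit in badpix_map.items():
        if name not in skip:
            bad_bits |= bit
    weight = weight.copy()
    weight[(mask & bad_bits) != 0] = 0.0
    return weight
```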
Yup, I think that's fine. Some possible modifications to consider (not obviously better than what you have):
- `BRIGHT_OBJECT`: these are essentially our bright star masks, and they're quite large. We'd normally mask these objects at the catalog stage later (if at all) but include them during pixel processing.
- `CLIPPED`: should only appear on the coadd, and there it just indicates objects for which the PSF model may be slightly off (because we rejected junk from one of the epoch-level images that we hadn't masked in that image separately). It indicates that one of those epoch-level images has an unmasked artifact and that the coadd PSF is probably too bad for lensing, but other coadd measurements (e.g. colors) are probably fine.
- `NOT_DEBLENDED`: is only relevant if we're using deblended pixels.

OK, I'll run with what I have for now but leave this issue open so we can optimize later.
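The optional relaxations above amount to dropping a few names from the bad-bit set before combining. A small hedged sketch (helper name `bad_bits` is illustrative, not from the actual code):

```python
# Same flag map as in the issue above.
HSC_BADPIX_MAP = {
    'BAD': 1, 'SAT': 2, 'INTRP': 4, 'CR': 8, 'EDGE': 16,
    'DETECTED': 32, 'DETECTED_NEGATIVE': 64, 'SUSPECT': 128,
    'NO_DATA': 256, 'BRIGHT_OBJECT': 512, 'CLIPPED': 1024,
    'CROSSTALK': 2048, 'NOT_DEBLENDED': 4096, 'UNMASKEDNAN': 8192,
}

# Always skipped, per the flag map comments.
DEFAULT_SKIP = {'DETECTED', 'DETECTED_NEGATIVE'}

def bad_bits(badpix_map, skip=DEFAULT_SKIP, also_skip=()):
    """OR together the bits of all flags not in skip/also_skip.
    also_skip could hold the optional relaxations discussed above,
    e.g. {'BRIGHT_OBJECT', 'CLIPPED', 'NOT_DEBLENDED'}."""
    bits = 0
    for name, bit in badpix_map.items():
        if name not in skip and name not in also_skip:
            bits |= bit
    return bits
```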