Open keflavich opened 2 years ago
Yes, there's some documentation in the papers and online here: http://decaps.skymaps.info/catalogs.html, though obviously in the context of DECaPS.
For what it's worth, I don't see the FWHM as derived from the second moment, but instead as derived from the effective number of pixels in the PSF, then transformed to what you would get for a Gaussian with the same effective number of pixels.
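That transformation can be sketched as follows. This is my own sketch of the idea, not the crowdsource implementation; it assumes a unit-sum PSF stamp, for which the effective number of pixels is n_eff = 1/sum(psf**2), and uses the fact that a circular Gaussian has n_eff = 4*pi*sigma**2:

```python
import numpy as np

def fwhm_from_neff(psf):
    """Gaussian-equivalent FWHM from the effective number of pixels.

    For a unit-sum PSF stamp, n_eff = 1 / sum(psf**2). A circular
    Gaussian with width sigma has n_eff = 4*pi*sigma**2, so invert
    that and convert sigma to FWHM. (A sketch of the idea, not the
    exact crowdsource code.)
    """
    psf = psf / psf.sum()            # normalize to unit sum
    neff = 1.0 / np.sum(psf ** 2)    # effective number of pixels
    sigma = np.sqrt(neff / (4 * np.pi))
    return 2 * np.sqrt(2 * np.log(2)) * sigma  # FWHM = 2.3548 * sigma
```

For a well-sampled Gaussian PSF this reproduces the usual 2.3548 * sigma, but unlike a second-moment estimate it is not dominated by the noisy far wings of the stamp.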
Yes, qf is what you describe. For unsaturated sources, I'd be deeply skeptical of anything with qf < 0.6 or so; the suggestion is that we're on the edge of a chip or a bad region and don't even have the peak on a good pixel. I'd put tighter bounds if I wanted very good photometry, more like 90-95%.
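A rough way to apply those cuts, assuming the catalog behaves like a structured array with a `qf` column (the array construction in the test is purely illustrative):

```python
import numpy as np

def select_by_qf(cat, qf_min=0.6):
    """Keep sources whose quality factor exceeds qf_min.

    qf_min=0.6 is the loose 'be deeply skeptical below this' cut from
    the discussion above; use ~0.9-0.95 for precision photometry.
    """
    return cat[cat['qf'] > qf_min]
```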
fracflux is intended to be a measure of how blended the source is. It's the PSF-weighted flux of the stamp after subtracting neighbors, divided by the PSF-weighted flux of the full image including neighbors. So if you have no neighbors around, it's 1. If typically half the flux in one of your pixels is from your neighbors, it's 0.5, where 'typically' is in a PSF-weighted sense.
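That definition can be sketched like this, with hypothetical stamps rather than the crowdsource internals:

```python
import numpy as np

def fracflux(psf, stamp_no_neighbors, stamp_full):
    """PSF-weighted flux of the neighbor-subtracted stamp divided by the
    PSF-weighted flux of the full stamp (neighbors included).

    A sketch of the definition as described above, not the crowdsource
    implementation. An isolated source gives 1; the more neighbor flux
    lands under the PSF weighting, the closer to 0 this gets.
    """
    w = psf / psf.sum()
    return np.sum(w * stamp_no_neighbors) / np.sum(w * stamp_full)
```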
The lbs quantities are "local background subtracted" quantities, where I've repeated the fit on the neighbor subtracted image with a new sky pedestal. They haven't proven valuable. Were they significantly different from the other fluxes, that would be a sign to worry.
The documentation on the decaps webpage for the iso quantities is: "flux derived from linear least squares fit to neighbor-subtracted image; significant difference from ordinary flux indicates a convergence issue" which is a pretty good summary of how I feel about it. There's really not anything fundamentally different about it than about the normal fluxes, except that it comes out of something simple rather than a big linear least squares package.
I think of spread_model as a simple size estimator. decaps webpage documentation is "sextractor-like spread_model; positive means the source is broader than a PSF". Here's the sextractor documentation: https://sextractor.readthedocs.io/en/latest/Model.html#model-based-star-galaxy-separation-spread-model
Thanks, that's extremely helpful. The fracflux one especially; I had completely failed to appreciate that it measures overlap.
Well, the LBS is super helpful in some cases. This is an example I have:
where I believe the "flux" measurement has gone totally off the rails. This is likely a separate issue, but I've had this problem occur with subtle changes in the weights. The "flux" values are definitely wrong. The LBS flux looks less wrong.
For comparison, this is what it should look like:
Yeah, I would want to know more there. Not knowing anything, it looks like the sky isn't being subtracted, leading everything to have a significant positive flux. Clearly the local background subtraction is helping, but I don't really think it should; something else is also going on.
Yes, that's exactly right. It took me a while to figure this out, but the sky model is very bad when the weights are subtly changed. Any idea what would cause that? I'm still digging to diagnose it, but your instincts are certainly more finely tuned than mine.
I'm using nskyx=nskyy=1, which, iiuc, is a simple constant model for each star cutout? Maybe I should be using something higher-order, but if it's not working for 0th or 1st order, I'm wary of doing so.
Try nskyx=nskyy=0. That's what we use in DECaPS and probably unWISE.
There are two contributions to the sky model: a linear one that is fit simultaneously with all of the stars, and a higher-spatial-order one that's just a median in different cells. We rely on the latter when nskyx=nskyy=0. nskyx=nskyy=1 means a pedestal is fit for the full image, not for each stamp.
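The "median in different cells" contribution could look roughly like this; the cell size and the piecewise-constant fill are my assumptions for illustration, not the actual crowdsource implementation:

```python
import numpy as np

def cell_median_sky(image, cell=64):
    """Sky estimated as a median in coarse cells, broadcast back onto
    the image grid. A rough sketch of the 'median in different cells'
    idea; because it's a per-cell median, a handful of bad pixels or
    bright stars barely move it, unlike a global linear fit.
    """
    sky = np.zeros(image.shape, dtype=float)
    ny, nx = image.shape
    for y0 in range(0, ny, cell):
        for x0 in range(0, nx, cell):
            block = image[y0:y0 + cell, x0:x0 + cell]
            sky[y0:y0 + cell, x0:x0 + cell] = np.median(block)
    return sky
```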
The former is nice in the blended limit in that it lets the sky get fainter and all of the stars get brighter simultaneously; i.e., it speeds convergence. But it couples all of the pixels together, so one bad pixel can ruin the analysis of a whole image. In your case, I'd guess you have some pixels with value = 0 or -infinity or just really implausibly low given the uncertainties and the true sky background. And the code is minimizing chi^2 by badly undersubtracting the sky to accommodate them.
So I'd guess you need to either mask them (ivar = 0) or get rid of the global sky fit so they don't ruin the whole image.
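A minimal sketch of the masking option, assuming a scalar sky estimate and an illustrative 10-sigma threshold (both are my choices here, not crowdsource defaults):

```python
import numpy as np

def mask_low_pixels(image, ivar, sky, nsigma=10.0):
    """Set ivar = 0 on pixels implausibly far below the sky level, so
    the global linear sky fit can't be dragged down by them.

    The scalar sky and the nsigma threshold are illustrative
    assumptions; crowdsource itself just needs ivar = 0 on bad pixels.
    """
    ivar = ivar.copy()
    good = ivar > 0
    sigma = np.full(image.shape, np.inf)
    sigma[good] = 1.0 / np.sqrt(ivar[good])   # per-pixel uncertainty
    ivar[image < sky - nsigma * sigma] = 0.0  # mask implausibly low pixels
    return ivar
```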
Thanks, that's again very helpful and clear. I bet there are some zeros, or near-zeros (1e-N with N>5?), in my data that I did not mask out. So I'm going to try masking those out, but I'll also try different orders.
The fitting gives you the model image, so the diagnostic I usually use is chi = (data-model)/sigma. I'd expect you'd find a small number of pixels have very large negative chi, yeah.
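That diagnostic can be sketched as follows; the -10 cutoff is an illustrative choice:

```python
import numpy as np

def worst_negative_chi(data, model, ivar, thresh=-10.0):
    """chi = (data - model) * sqrt(ivar); return the (y, x) indices of
    pixels with chi below thresh, the expected signature of unmasked
    bad pixels dragging the sky fit down. thresh is illustrative.
    """
    chi = (data - model) * np.sqrt(ivar)
    return np.argwhere(chi < thresh)
```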
Another documentation question.
What are all the column names in the returned catalog? https://github.com/schlafly/crowdsource/blob/d0bb2ebfa3d57611079581b4601f9056efa0c615/crowdsource/crowdsource_base.py#L954-L960
Some are pretty obvious. What are:

- `qf` — quality factor? It seems to be the integral over the PSF for each star including only "good" pixels. Is there any rule-of-thumb for what is a 'good enough' number?
- `impsf` — is it the image convolved with the PSF? Still not sure what this statistic tells me.
- `compute_iso_fit` — I can't quite tell what this function is doing in a quick read; it looks like it's performing a least-squares fit between the PSF and the data, allowing only for amplitude and shift variation? But I might be misreading it.

I'm sure some of this is documented in the literature in your or others' papers, but even if so, it would be helpful to have a statement of what is intended by the code so I don't mis/overinterpret what it's doing.