spacetelescope / jwst

Python library for science observations from the James Webb Space Telescope
https://jwst-pipeline.readthedocs.io/en/latest/

Error arrays in source catalog #5182

Closed stscijgbot-jp closed 3 years ago

stscijgbot-jp commented 4 years ago

Issue JP-1611 was created on JIRA by Steven Crawford:

While testing data created with Mirage on the JupyterHub platform, the linked data set was processed with the default configuration for the pipeline.

The source catalog step executed successfully and produced a catalog, but all of the error columns are filled with values of NaN.

An example file is attached along with links to the data and notebook used. 

jdavies-st commented 4 years ago

The drizzle algorithm itself doesn't produce anything like an uncertainty to propagate. For single-resampled images one could get a handle on it by resampling the input uncertainty array, but wouldn't one still need to account for correlation between adjacent pixels?

It gets much more complicated when resampling multiple input images into a single output. I'm sure @karllark has opinions and good ideas about this.
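
A toy Monte Carlo sketch of that correlation, using plain numpy with linear interpolation standing in for the drizzle kernel (illustrative only, not pipeline code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_in = 20000, 50

# Resample pure, pixel-to-pixel independent noise onto a 2x finer grid.
x_in = np.arange(n_in)
x_out = np.linspace(0, n_in - 1, 2 * n_in)
noise = rng.normal(0.0, 1.0, size=(n_trials, n_in))
resampled = np.array([np.interp(x_out, x_in, row) for row in noise])

# Adjacent output pixels draw on the same input pixels, so their noise
# is correlated even though the input noise was not.
corr = np.corrcoef(resampled[:, 25], resampled[:, 26])[0, 1]
print(f"correlation of adjacent resampled pixels: {corr:.2f}")  # ~0.6 here
```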

stscijgbot-jp commented 2 years ago

Comment by Larry Bradley on JIRA:

The error arrays are all NaNs because the level-3 resampling step does not produce error arrays.  That capability will need to be provided before the source catalog can include photometric errors.

stscijgbot-jp commented 2 years ago

Comment by Larry Bradley on JIRA:

Karl Gordon is aware of this issue (we discussed it a while back).  We decided that having no errors in the source catalog was better than having wrong errors.

Properly accounting for correlated errors would be great, but it's probably not necessary from the start.  HST's astrodrizzle has never included correlated errors.

stscijgbot-jp commented 2 years ago

Comment by Steven Crawford on JIRA:

Ah, that is not an easy problem.

Is this work ticketed somewhere, or have the specifications been written? I didn't turn up anything with a couple of quick checks, but if specifications do not exist, then this should be passed to the CalWG.

stscijgbot-jp commented 2 years ago

Comment by Howard Bushouse on JIRA:

To my knowledge no specifications exist to deal with uncertainties in resampled data.

stscijgbot-jp commented 2 years ago

Comment by Alicia Canipe on JIRA:

Flagging Anton Koekemoer and Swara Ravindranath for the CalWG specs, and assigning Anton for next steps. 

stscijgbot-jp commented 2 years ago

Comment by Anton Koekemoer on JIRA:

I'll respond first to this comment from above:

The error arrays are all NaNs because the level-3 resampling step does not produce error arrays.

Currently, in fact, there are error arrays in the Stage 3 output products from CALWEBB_IMAGE3. I'm looking at a recent "_i2d.fits" multi-extension output file from running a recent version (0.16.3.dev114+g02d64bc), which includes SCI, ERR, and WHT extensions, among others.

The baseline specifications (implicitly) capture the use of weighting in the "Image Combination" step by referencing the "drizzlepac" approach, which carries out drizzle combination using weighting and produces a weight array.

So, first, since a weight array (evidently inverse variance) is available in the image3 products, these values should no longer be NaN in the catalog, and I request here that Larry Bradley update the catalog-generation routine to use the values from the weight array in calculating errors.

Now, the above will assume that the errors are correct ... here we enter the details mentioned by others above.

If the current implementation follows drizzlepac (James Davies [X] to confirm please), then the drizzle combination should ideally be weighted by inverse variance, which for each exposure would also include all the background error terms (dark current, flat-field errors, Poisson noise from the sky, etc.). When also attempting to include Poisson noise from sources in this weighting, there are several subtleties that need to be carefully considered, in addition to capturing correlated noise terms using covariance matrices; see this page for a useful discussion:

https://outerspace.stsci.edu/display/JWSTCC/2017-01-10+Meeting+notes

For right now, the "baseline" version seems like the most sensible approach, i.e. following the "drizzlepac" approach in treating all the error terms except for correlated noise, while acknowledging that correlated noise should eventually be treated properly (e.g. via the pixel covariance matrices) as a future "enhanced" version.

So, it seems the above "baseline" approach is in fact what is currently implemented, i.e. the current image3 products are combined following the drizzlepac approach, with weight arrays apparently corresponding to inverse variance (including, apparently, noise from the sources in addition to the background noise terms):

Could James Davies [X] and/or Howard Bushouse confirm this please, or amend as necessary?

Could Larry Bradley start looking at using these error arrays to calculate photometric errors in the catalogs, so that we can at least have some values to work with in terms of testing how accurate these errors are?
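
If the weight array really is inverse variance, turning it into a per-pixel standard error is straightforward. A minimal sketch under that assumption (the filename is a placeholder, and as the comments below establish, the current WHT is in fact not inverse variance):

```python
import numpy as np
from astropy.io import fits

with fits.open("jw_example_i2d.fits") as hdul:   # hypothetical filename
    wht = hdul["WHT"].data.astype(float)

# sigma = 1 / sqrt(inverse variance); guard against zero or negative weights
err = np.full_like(wht, np.nan)
good = wht > 0
err[good] = 1.0 / np.sqrt(wht[good])
```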

 

stscijgbot-jp commented 2 years ago

Comment by James Davies [X] on JIRA:

The weight maps are not inverse variance currently. Weighting is currently done using the exposure time (exptime) of each exposure. When resample was initially implemented, we did not have accurate variances from the ramp fitting. Now we do.

There's an open issue to implement IVM weighting: #800.

So this is work that still needs to be done. Do we want to weight by inverse variance for up-the-ramp fitted data?

Once (if?) we do IVM weighting, the next question is whether we want to populate the ERR extension with it. Previous guidance from Karl Gordon on the purpose of the ERR array suggested "no", but I'm happy to revisit this.
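
For reference, a minimal sketch of what building an IVM weight map from the per-exposure variance arrays might look like. The extension names follow the standard cal-file layout, and (per the drizzlepac IVM convention) source Poisson noise is deliberately left out so bright pixels are not down-weighted. This is an assumption-laden illustration, not the implementation tracked in the open issue.

```python
import numpy as np
from astropy.io import fits

with fits.open("jw_example_cal.fits") as hdul:   # hypothetical filename
    # VAR_POISSON is omitted on purpose: including source Poisson noise
    # in the weights would down-weight the sources themselves.
    var = hdul["VAR_RNOISE"].data + hdul["VAR_FLAT"].data

ivm = np.zeros_like(var)
good = np.isfinite(var) & (var > 0)
ivm[good] = 1.0 / var[good]
```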

stscijgbot-jp commented 2 years ago

Comment by Anton Koekemoer on JIRA:

Thanks for the reply. In that case the output weight map in this example is a puzzle, because it has sources in it (see image), with values of a similar order of magnitude to inverse variance when compared to the rms in the images. But it also doesn't seem to correspond fully to what I'd expect based on HST IVM images, so some further clarification would be helpful about what exactly is in these WHT images. (This was from a NIRCam example using LW with the F277W filter and the above software version.)

[attached image: image-2020-08-26-10-13-45-410.png]

stscijgbot-jp commented 2 years ago

Comment by Anton Koekemoer on JIRA:

For completeness, here are the SCI array (left) and ERR array (right) of one of the calwebb_image2 output _cal.fits exposures that went into the above image3 combined image.

[attached image: image-2020-08-26-10-26-35-199.png]

stscijgbot-jp commented 2 years ago

Comment by Anton Koekemoer on JIRA:

Scheduled this issue for discussion at the JWST Cal WG meeting on 2020-09-29.

stscijgbot-jp commented 2 years ago

Comment by James Davies [X] on JIRA:

Interesting WHT arrays, Anton. I agree these do not look correct, and I've verified that I see the same issue in the jwst regression test data. Since we are weighting by exptime, they should be roughly uniform across each detector, not dependent on the per-pixel flux rate. These do look like inverse variance or something like it.

That said, I can say for certain that the JWST cal code is calling drizzle with exptime weighting and expecting to get back from drizzle the WHT map that was actually used in the weighting, but it appears drizzle is returning something else. So perhaps it's a bug in drizzle?

Having a look through the internal drizzle C code (something I have not done before), it's not clear to me at first glance that it's actually expected to return a WHT map at all. Though clearly this works in the very similar C code in drizzlepac? We'll need someone familiar with the drizzle C code and its Python interface to investigate this.

I suspect it is either a bug in the internal C code or a bug in the calling API.
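
One quick way to characterize what is actually in a WHT array (a hedged diagnostic sketch; the filename is a placeholder): an exptime-weighted map should be nearly uniform across the detector, while an IVM-like map will track per-pixel flux.

```python
import numpy as np
from astropy.io import fits

with fits.open("jw_example_i2d.fits") as hdul:   # hypothetical filename
    sci = hdul["SCI"].data.ravel()
    wht = hdul["WHT"].data.ravel()

good = np.isfinite(sci) & np.isfinite(wht) & (wht > 0)
spread = np.std(wht[good]) / np.mean(wht[good])
r = np.corrcoef(sci[good], wht[good])[0, 1]
print(f"WHT fractional spread: {spread:.3f} (near 0 suggests exptime weighting)")
print(f"SCI-WHT correlation:   {r:+.3f} (nonzero suggests flux-dependent weights)")
```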

Regardless of the resolution of the bug above, it would be good to discuss what sort of errors are acceptable for resampled products in the vanilla pipeline. And this may be different for resampled single images vs resampled combinations of multiple images. What about 2D resampled spectra?

Do we want to use the IVM WHT? Does resampling the ramp fitting variance give us anything useful?

And it would be good to discuss where any of these errors should go: the WHT or ERR extension of the final _i2d or _s2d product?

stscijgbot-jp commented 2 years ago

Comment by Howard Bushouse on JIRA:

Fixed in #5997

stscijgbot-jp commented 2 years ago

Comment by Howard Bushouse on JIRA:

The source_catalog step has been updated in #5997 to compute error estimates for all fluxes and mags. This ticket was not originally created by INS, but I feel it should be tested by someone on one of the instrument teams, so for now I'm going to reassign to Alicia Canipe and leave it to her to determine testers.
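
For context, the textbook propagation from a flux error to a magnitude error (a sketch of the standard formula, not necessarily verbatim what #5997 implements):

```python
import numpy as np

def mag_err(flux, flux_err):
    """For m = -2.5 log10(F) + ZP: sigma_m = (2.5 / ln 10) * sigma_F / F."""
    return 2.5 / np.log(10) * np.asarray(flux_err, float) / np.asarray(flux, float)

print(mag_err(100.0, 5.0))  # a 5% flux error maps to ~0.054 mag
```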

stscijgbot-jp commented 2 years ago

Comment by Howard Bushouse on JIRA:

Note to testers: It would probably make sense to wait to test this update until the updates to the resample step (JP-1944) are complete, so that the inputs to source_catalog have non-zero errors to work with.

stscijgbot-jp commented 2 years ago

Comment by Alicia Canipe on JIRA:

Thanks Howard Bushouse, I added our label for tracking tickets that will need tests and will keep this on our radar. Misty Cracraft, FYI.

stscijgbot-jp commented 2 years ago

Comment by Anton Koekemoer on JIRA:

Tagging Matteo Correnti (Photom group lead) so that he can help coordinate providing inputs to Alicia Canipe for testing the error arrays in the source catalogs.

stscijgbot-jp commented 2 years ago

Comment by Kevin Volk on JIRA:

Running a test with NIRISS simulated images in pipeline 1.3.3 shows that the source catalogue uncertainties are generally populated. Some cases have signal and uncertainty values of NaN for a subset of the measurement values, which seems odd. This is probably due to cases where a hot pixel is picked up as a point source (the simulation has only a few actual point sources, but many more "point sources" are catalogued with the default parameters).

With the default catalogue values, all sources have very unrealistic S/N ratios: very small signal values with much larger uncertainties. Signal values of 1e-06 to 1e-08 in absolute value are assigned noise values of order 0.05. "Sources" with S/N ratios of 0.000001 should not be included in the source catalogue, in my opinion. Whether this is a problem in the code or just a reflection of bad default source-detection parameters is not clear.
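
A sketch of the kind of S/N sanity cut described above; the catalog filename is a placeholder and the column names follow the source_catalog output convention, so check them against the actual ecsv file:

```python
import numpy as np
from astropy.table import Table

cat = Table.read("jw_example_cat.ecsv")          # hypothetical filename
snr = cat["aper_total_flux"] / cat["aper_total_flux_err"]
keep = np.isfinite(snr) & (snr > 3)              # e.g. a 3-sigma threshold
print(f"{keep.sum()} of {len(cat)} sources pass the S/N cut")
```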

 

stscijgbot-jp commented 2 years ago

Comment by Alicia Canipe on JIRA:

Also, checking with a NIRCam simulation, I can confirm that uncertainties are generally populated for us, too, and show values similar to those pointed out by Kevin for NIRISS (https://jwst-validation-notebooks.stsci.edu/jwst_validation_notebooks/calwebb_image3/jwst_image3_nircam_test/jwst_image3_nircam_test.html).

Matteo Correnti, will you investigate when you have a chance? We can update the testing notebook with your higher-quality checks afterward.

stscijgbot-jp commented 2 years ago

Comment by Misty Cracraft on JIRA:

Testing this on MIRI flight data, I have run image3 on a star field in the LMC and been able to plot the magnitudes and fluxes against their errors. While I haven't done any in-depth testing on the errors, they look reasonable at first glance (with some outliers). I'm attaching screen grabs of plots from a source catalog notebook. The errors are no longer NaNs. Do we need a more thorough analysis of the accuracy of the errors, or is the testing that has already been done enough to close this ticket? Anton Koekemoer, Alicia Canipe: the plots are attached to the top of the ticket for reference.

stscijgbot-jp commented 2 years ago

Comment by Anton Koekemoer on JIRA:

Thanks Misty Cracraft! Before we close this, it would be good to hear back from Kevin Volk  once he has had a chance to run this on NIRISS flight data as well, so that he can also confirm whether or not he feels that he needs to analyze those uncertainties more thoroughly.

stscijgbot-jp commented 2 years ago

Comment by Kevin Volk on JIRA:

I have had issues with the uncertainties in the pipeline for photometric calibration: the background uncertainties per pixel are large, whereas the distribution of the values looks Gaussian and allows a sigma to be assigned that is much smaller than the values in the error arrays. I have not looked into the source catalogue uncertainties much. I just got back from vacation, but I will put this on my list of items to look into.
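
A sketch of the comparison being described, assuming a standard cal file with SCI and ERR extensions (filename hypothetical): estimate the empirical background width with sigma clipping and compare it to the typical ERR value.

```python
import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clipped_stats

with fits.open("jw_example_cal.fits") as hdul:   # hypothetical filename
    sci = hdul["SCI"].data
    err = hdul["ERR"].data

_, _, bkg_sigma = sigma_clipped_stats(sci, sigma=3.0)  # empirical Gaussian width
print(f"sigma-clipped background sigma: {bkg_sigma:.4g}")
print(f"median ERR-array value:         {np.nanmedian(err):.4g}")
```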

stscijgbot-jp commented 2 years ago

Comment by Kevin Volk on JIRA:

I have looked at the catalogue for a NIRISS sky-flat observation set from commissioning, processed with pipeline version 1.8.2. The S/N values look, if anything, larger than I would expect. The attached plot shows the flux-density error on Y against the flux density on X; I see a floor on the error, independent of signal, which is not what I would have expected. This is not what I tend to get in my own photometry measurements, but I work on individual images rather than the resampled ones, and usually on the rate images. This is certainly a better result than before, when the S/N in my measurements was clearly too small whenever I used the pipeline error values to estimate the background uncertainties.
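
A minimal sketch of that diagnostic plot (catalog filename and column names are assumptions, as in the earlier sketch):

```python
import matplotlib.pyplot as plt
from astropy.table import Table

cat = Table.read("jw_example_cat.ecsv")          # hypothetical filename
cat = cat[cat["aper_total_flux"] > 0]            # log axes need positive values
plt.loglog(cat["aper_total_flux"], cat["aper_total_flux_err"], "k.", ms=3)
plt.xlabel("flux density")
plt.ylabel("flux density error")                 # a flat floor at low flux stands out here
plt.show()
```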

I think this can be closed, at least for now; we will have to wait and see whether the community or some in-house expert like Jay Anderson comments on the uncertainties in the photometry.

stscijgbot-jp commented 1 year ago

Comment by Howard Bushouse on JIRA:

Kevin Volk suggested closing this on 20-Oct-2022, so I'm closing now.