spacetelescope / jwst

Python library for science observations from the James Webb Space Telescope
https://jwst-pipeline.readthedocs.io/en/latest/

Source finding algorithm in tweakreg_catalog is inadequate #6295

Closed stscijgbot-jp closed 7 months ago

stscijgbot-jp commented 3 years ago

Issue JP-2259 was created on JIRA by Howard Bushouse:

It's becoming increasingly clear from reports from the NIRISS, NIRCam, and MIRI teams that there are no good parameters for source finding in tweakreg in its current form (using DAOStarFinder) that reliably find only stars without also finding lots of garbage. And with the tweakreg algorithm, garbage in leads to garbage out. More often than not, these unreliable catalogs misalign rather than align.

Misty Cracraft, in the #jwst_calpipe_testing slack channel, has reported that there's nothing that works with MIRI imaging data containing very clear stellar sources that are (by eye) very easy to detect. The open issue JP-1570 makes the same argument for NIRISS via work Kevin Volk has tried, and James Davies [X] has reproduced. And for NIRCam, I think we are getting a bit lucky with the only tweakreg regression test that we have in our regtest suite, in that it finds lots of garbage but doesn't result in severe misalignment. I've chatted with Bryan Hilbert (NIRCam) here too, and they just turn it off for simulated Mirage data.

DAOStarFinder, which is used in tweakreg_catalog, can find the peaks of JWST stellar PSFs, but it also finds all the chunky bits around them. One can tune this a bit, but it is very sensitive to parameters like kernel_fwhm, threshold, etc. Basically, the Stetson DAOFind algorithm, designed for ground-based single-mirror telescopes with CCDs, does not work well for JWST. Not a huge surprise.

The fact that it finds multiple sources slightly offset from the actual peak of the star blows up tweakreg. This is the worst possible situation: it produces "reliable" matches to the offset "sources" and very assured misalignment on the part of tweakreg.

Meanwhile, the source_catalog step works great at the end of calwebb_image3. It finds sources reliably with little to no contamination from artifacts. We could just use photutils.segmentation to find sources in tweakreg_catalog. It would be a drop-in replacement, much more reliable, and most importantly, would not depend strongly on initial parameters instrument-to-instrument. A quick check of trying it using the photutils documentation example showed it worked well for the NIRISS example from JP-1570.
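
For concreteness, here is a minimal sketch of what such a segmentation-based finder could look like, modeled on the photutils documentation example; the function name, kernel size, and thresholds are illustrative assumptions, not the pipeline's actual parameters:

```python
from astropy.convolution import Gaussian2DKernel, convolve
from astropy.stats import gaussian_fwhm_to_sigma
from photutils.background import Background2D, MedianBackground
from photutils.segmentation import SourceCatalog, detect_sources

def find_sources_segmentation(data, fwhm=3.0, nsigma=3.0, npixels=10):
    """Return (x, y) centroids of sources found via image segmentation."""
    # Estimate and subtract a smooth 2D background
    bkg = Background2D(data, 50, bkg_estimator=MedianBackground())
    data_sub = data - bkg.background
    # Smooth with a Gaussian kernel matched roughly to the PSF core
    kernel = Gaussian2DKernel(gaussian_fwhm_to_sigma * fwhm, x_size=5, y_size=5)
    kernel.normalize()
    convolved = convolve(data_sub, kernel)
    # Require npixels connected pixels above nsigma * background RMS
    segm = detect_sources(convolved, nsigma * bkg.background_rms, npixels=npixels)
    if segm is None:  # no sources found
        return [], []
    cat = SourceCatalog(data_sub, segm, convolved_data=convolved)
    return cat.xcentroid, cat.ycentroid
```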

The (maybe) downside is that it would also detect non-stellar sources (galaxies), which might not have as precise centroids. But there will be lots of these sources for JWST data, so perhaps the increased number (and lack of proper motion) will mean that the decreased centroid accuracy of individual sources will be swamped by the huge increase in number of sources for extragalactic fields.

Regardless, the calibration pipeline should at a minimum, not make the alignment worse. So we should use an algorithm that at least can reliably detect real sources without detecting junk. Later optimizations to filter those real sources (use only peaky sources with low ellipticity) might offer improvements, but are not necessary now to improve over the status quo where we can not reliably detect sources at all.

Note that large existing surveys using HST already do this. 3D-HST (Candels deep imaging fields + grism) did tweakreg alignment using all sources, not just stars, and did so via SExtractor, i.e. segmentation, and produced mosaics with excellent alignment across multiple instruments. Not many stars in those deep fields. So there's already precedent for this. I suspect other surveys have too.

A possibility for the future, as mentioned by Larry Bradley, would be to use webbpsf and photutils.detection.StarFinder together, i.e. PSF detection. Nobody has tested this yet for JWST, but it probably has promise. It would require generating a PSF kernel for each instrument/filter combo (reffile or on-the-fly?) and then using it for detection in photutils. It may be a more optimal solution, but one would have to show that photutils.segmentation is somehow not performing well in order to justify the added work of implementing this. But it is something to think about and maybe get some INS teams to fiddle with.
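
As a rough, untested illustration of that idea (the filter choice, kernel size, and threshold below are assumptions):

```python
import webbpsf
from astropy.stats import sigma_clipped_stats
from photutils.detection import StarFinder

def psf_detect(data, filt='F150W'):
    """Detect stars by matched filtering with a webbpsf-generated PSF kernel."""
    nis = webbpsf.NIRISS()
    nis.filter = filt
    psf = nis.calc_psf(fov_pixels=11)   # a small kernel is enough for detection
    kernel = psf['DET_SAMP'].data       # detector-sampled PSF
    _, median, std = sigma_clipped_stats(data, sigma=3.0)
    finder = StarFinder(threshold=5.0 * std, kernel=kernel)
    return finder(data - median)        # astropy Table of detections, or None
```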

jdavies-st commented 2 years ago

Switching from DAOStarFinder to IRAFStarFinder in tweakreg_catalog visually improves alignment of the NIRCam regression test data that we use in the jwst pipeline nightly tests. This is F444W data on the LONG channel. For MIRI F770W imaging data, the default (par reffile) parameters work very well with IRAFStarFinder, but a bunch of tweaks with unrealistic S/N thresholds of 200 are needed when using DAOStarFinder. Results of the two are similar and look good.

Again, this is just mirage and mirisim data we use in our nightly regression tests, but using IRAFStarFinder certainly seems to be an improvement for some and no worse than the status quo (with less finicky parameters) for others.

stscijgbot-jp commented 1 year ago

Comment by Howard Bushouse on JIRA:

Anton Koekemoer assigned to you for now for discussion by the Cal WG.

stscijgbot-jp commented 1 year ago

Comment by Anton Koekemoer on JIRA:

Howard Bushouse thanks, though making this a new ticket effectively 'resets' the extensive discussion in the linked tickets, especially JP-2196, which is now labelled as "closed" (although the issue is ongoing) - so I've added the people from that ticket as watchers on this one, to help provide some continuity to this discussion.

stscijgbot-jp commented 1 year ago

Comment by Anton Koekemoer on JIRA:

I also see that the tweakreg approach/ algorithm was decided in 2015, when perhaps the behaviour of this source finding algorithm in the presence of the JWST PSF was not yet as well characterized.

To start considering a path forward, following on from this comment in the description about potentially using the source_catalog approach from calwebb_image3 as a solution --

Meanwhile, the source_catalog step works great at the end of calwebb_image3. It finds sources reliably with little to no contamination from artifacts. We could just use photutils.segmentation to find sources in tweakreg_catalog. It would be a drop-in replacement, much more reliable, and most importantly, would not depend strongly on initial parameters instrument-to-instrument. A quick check of trying it using the photutils documentation example showed it worked well for the NIRISS example from JP-1570.

The (maybe) downside is that it would also detect non-stellar sources (galaxies), which might not have as precise centroids. But there will be lots of these sources for JWST data, so perhaps the increased number (and lack of proper motion) will mean that the decreased centroid accuracy of individual sources will be swamped by the huge increase in number of sources for extragalactic fields.

-- this approach of using galaxies together with stars can be successful in high-latitude fields, and is an approach we've used for a couple of decades in large surveys like GOODS, CANDELS, HFF, etc. While the caveats are correctly described (eg galaxies having somewhat less precise centroids), this is generally mitigated when enough are detected (eg a couple of hundred or more), and when the galaxies are downselected to use only those that are more compact and have minimal substructure. So this would support the use of the source_catalog / photutils approach as a "drop in" replacement.

Scenarios that remain potentially problematic with cataloging approaches (either the current tweakreg or a replacement catalog approach) would include targets with significant extended emission across the field, eg bright galactic nebulae or very extended galaxies, or other scenarios where there are insufficient numbers of compact sources. Targets like these may need a different approach than those that are amenable to cataloging.

To chart a path forward it would be helpful to see summary descriptions of the actual datasets that were used for the tests so far (incl target descriptions, instrument, mode, filters, numbers of exposures, and reports of the results with tweakreg).


stscijgbot-jp commented 1 year ago

Comment by Misty Cracraft on JIRA:

For the MIRI tests, the data used is a set of 8 dithered images (2 exposures at each of 4 dithers). There are 50 large, obvious, bright stars (some close together), all easily identifiable. The images are for the MIRI Imager in the F1130W filter. When the images are well aligned, tweakreg doesn't do harm unless the parameters are set poorly, but for a test, one of the eight images was offset by one arcsecond in both spatial directions, and no matter what parameters were used, the final image showed double sources. Tweakreg was unable to find the appropriate shift to bring the one offset image back into alignment. Notebook link here: https://jwst-validation-notebooks.stsci.edu/jwst_validation_notebooks/tweakreg/jwst_tweakreg_miri_test/jwst_tweakreg_miri_test.html


stscijgbot-jp commented 1 year ago

Comment by Anton Koekemoer on JIRA:

thanks Misty Cracraft this is very helpful -- also, if the other instrument testing representatives would like to post similar information here for their instruments, that would be helpful too.

This issue is scheduled for discussion at our next JWST Cal WG meeting 2021-09-07

stscijgbot-jp commented 1 year ago

Comment by Kevin Volk on JIRA:

Various people on the NIRISS team have played around with the different source finding algorithms, and the general result for everyone is that the best way to find sources in the simulated NIRISS images is to use the photutils.segmentation package, specifically the source_detect routine, and to find the peak via moments on a small image segment around the reported source positions (rather than just using the segmentation map to find a centre position).  This is particularly the case for the shorter wavelength NIRISS filters where the PSF undersampling is a big issue.  One might wish to use the segmentation map alone to get a centre position in the case where the core of the PSF is saturated, but otherwise that seems less accurate than using the centre-of-mass method on a small image cut-out around the position from the photutils routine to get a revised position.  That is important if sub-pixel accuracy is needed in the position, such as for PSF modelling.
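
A minimal sketch of the cutout refinement described above (the box size and local background handling are assumptions, not the NIRISS team's actual code, and no bounds checking is done near the detector edges):

```python
import numpy as np
from photutils.centroids import centroid_com

def refine_centroid(data, x0, y0, half_width=3):
    """Refine a rough (x0, y0) position via center-of-mass on a small cutout."""
    ix, iy = int(round(x0)), int(round(y0))
    cutout = data[iy - half_width: iy + half_width + 1,
                  ix - half_width: ix + half_width + 1]
    # Subtract the local median so the background does not bias the moments
    cx, cy = centroid_com(cutout - np.median(cutout))
    return ix - half_width + cx, iy - half_width + cy
```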

Even when the stars are properly identified there can be an issue with the default tweakreg rotation/shift/scale option for the parameters.  For simulations of dithered images the results are much better if the tweakreg step is directed to only use a shift; that is not surprising since the images are indeed all shifted by constant offsets in the Mirage simulations to a high degree of precision since the NIRISS distortion is expected to be small.  It is not clear to me why the rotation/shift/scale option usually leads to bad results in the image combination in cases where the sources are properly identified on the input images.  One can of course simply set the default parameter for the tweakreg step to just solve for a shift, so this does not require a code change.  I note it here because not all bad tweakreg results are due to not finding the stars properly.

stscijgbot-jp commented 1 year ago

Comment by Anton Koekemoer on JIRA:

thanks Kevin Volk  for the NIRISS update, that's very helpful.

In terms of the source finding algorithms, and especially the results using photutils/source_detect, if you could point us to links showing these runs (or provide the names of those in NIRISS who did the runs) that would be really useful, especially for deciding about potentially proceeding with using the photutils approach in the pipeline for tweakreg, as well as providing tests to subsequently validate the pipeline implementation.

In terms of the separate question about the tweakreg alignment approach, I'll mention that there is now available (or soon should be) a new option in tweakreg, namely "rshift", which would solve simply for shift and rotation, and not scale (nor skew). For the operational pipeline my preference would also be to solve for the least number of parameters to achieve a successful fit (eg, either just a shift, or possibly shift+rotation, but perhaps not shift+rotation+scale if we don't expect significant changes in scale between different exposures). We may cover this at the meeting too, if there's time after discussing the source finding algorithm issues.
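
For reference, a minimal example of requesting a lower-order fit in the tweakreg step; the fitgeometry parameter name and its option values ('shift', 'rshift', 'rscale', 'general') are assumed here:

```python
from jwst.tweakreg import TweakRegStep

step = TweakRegStep()
step.fitgeometry = 'rshift'   # solve for shift + rotation only, no scale or skew
result = step.run('my_image3_asn.json')   # hypothetical association of cal images
```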

stscijgbot-jp commented 1 year ago

Comment by Kevin Volk on JIRA:

People who have looked at the source detection codes in photutils are Paul Goudfrooij, Swara, Tony Sohn, and me (and maybe also André Martel).  But none of these comparisons are shown in a notebook (as far as I know), so I cannot point you to some link to see the results.

In terms of the "rshift" option, it looks like for NIRISS dithered simulations from Mirage that the shift is what is needed, and using rshift produces worse results rather than finding a rotation of zero and the same shift.  Maybe it was just because the number of stars used was low and hence things were not strongly constrained.  We have to see on-orbit whether a rotation is needed due to guide stars in the star trackers along with a shift, or whether a shift alone will work.

stscijgbot-jp commented 1 year ago

Comment by Anton Koekemoer on JIRA:

Thanks Kevin Volk for the additional details. If whoever is coordinating the pipeline testing for NIRISS could collect these tests/results that have been carried out by the people you mention, that would be useful for documenting the issue with the current approach and assessing a path forward for improving it. This would also include more quantitative results on how the various options compare (shift, rshift, rscale), in terms of the number of stars in the simulations, how bad the results were in each case, and what parameter settings led to improved results.

stscijgbot-jp commented 1 year ago

Comment by Kevin Volk on JIRA:

The main test case for pipeline testing is a simulation with 10 isolated bright stars, all of the same brightness but not saturated.  Various versions of this test scene are available, differing in the amount of noise/cosmic rays/flat field structure in the images, because the pipeline generally fails to find all 10 stars in these scenes in the default simulation and I was trying to see if changing these aspects of the scene would improve the detection efficiency.

The scene has a four-point dither pattern, and for testing tweakreg I have four scenarios: the normal Mirage model; a simulation where one of the four dither positions is slightly offset; a simulation where one of the four dither positions is slightly rotated; and a simulation where one of the four dither positions is slightly offset and rotated.  None of the simulations change the scale of any of the images.  The tests of tweakreg are intended to see if it can correct for these small imperfections in one of the four images to align the scene properly.  But what is found is that even the unperturbed scene does not get combined properly in the pipeline with the default parameters.  I have tried running the tweakreg step with the default (currently shift/rotation/scale) and with just a shift.  I do not think that I have tried the shift/rotation option on the normal simulation.

For comparison I have the same simulations but with the normal star list one would expect for the pointing rather than the artificially simplified scene.  All the tests that were run on the simplified scene were also run on the normal scene as well for comparison.  While the tweakreg step did a little better on the scene with many stars than on the scene with just the 10 bright stars, it still did not produce a tight PSF with the default parameters.

Independent of that testing of tweakreg I have made a number of comparisons of the pipeline source catalogue with the input Mirage star catalogues and with attempts to make source catalogues with different photutils routines: the peak_find routine, the IRAFStarFinder routine, the DAOStarFinder routine, and the detect_sources routine that the pipeline uses.

As far as I know only Paul has made other tests of the tweakreg routine, in the context of mosaiced/dithered image sets, while the other people have been testing the various cataloging options for different commissioning needs rather than testing tweakreg.


stscijgbot-jp commented 1 year ago

Comment by Mihai Cara on JIRA:

Quoting Kevin Volk: "Even when the stars are properly identified there can be an issue with the default tweakreg rotation/shift/scale option for the parameters.  For simulations of dithered images the results are much better if the tweakreg step is directed to only use a shift; that is not surprising since the images are indeed all shifted by constant offsets in the Mirage simulations to a high degree of precision since the NIRISS distortion is expected to be small.  It is not clear to me why the rotation/shift/scale option usually leads to bad results in the image combination in cases where the sources are properly identified on the input images.  One can of course simply set the default parameter for the tweakreg step to just solve for a shift, so this does not require a code change.  I note it here because not all bad tweakreg results are due to not finding the stars properly."

I am very surprised that, for example, a "general" (shift+rotation+skew) fit would give worse results than just a "shifts" fit. This could happen only in extreme situations, such as having all sources along a straight line, or having all sources bunched together in a small region of the image with the rest of the image empty (either would make solving for rotations a poorly conditioned problem). However, this may be irrelevant if sources are not detected correctly.

stscijgbot-jp commented 1 year ago

Comment by Kevin Volk on JIRA:

I do not seem to have a description of the full set of tweakreg test files linked in any Jira ticket, just some sub-sets described in different tickets having to do with the imaging pipeline testing.  Hence I will give a detailed description here.

The files for my tweakreg tests exist on /ifs/jwst/wit/niriss/kevin.  Not everyone has access to this disk area.

There are several directories with variations on the same test, each with a different set of Mirage simulations:

simple_dithered_imaging_example
simple_dithered_imaging_example1
simple_dithered_imaging_example2
simple_dithered_imaging_example3
simple_dithered_imaging_example3a
complex_dithered_imaging_example

The directories named "simple_dithered_imaging_example" have a simplified scene with 10 bright, isolated sources.  The individual "simple" directories effectively differ in the way the background signal is generated (all defaults in the original simple_dithered_imaging_example directory; no bad pixels, no 1/f noise, flat fielding with a uniform flat field, in some combinations in the other directories) to test whether these types of effects are causing issues in the tweakreg runs.

The run with the default parameters in Mirage is in "simple_dithered_imaging_example".  One would hope that for testing this would be sufficient and one would not have to also deal with the other "simple" directories where I am using simplified dark/gain/flat field/bad pixel map reference files in various ways.  But since I keep seeing issues in the results I have the additional testing directories.  The "complex_dithered_imaging_example" directory was originally done with the default Mirage parameters but in the most recent round of testing I changed it to use some of the alternative reference files.

Within each directory are the Mirage files for making simulations of a four-point dithered exposure in one filter of NIRISS.  Then there are additional Mirage parameter files for making simulations for dither position 2 with small perturbations on the pointing parameters:  a small shift, a small rotation, and the combined shift and rotation.  After the Mirage runs are done for these perturbed parameters, a script is used to change the headers so that the _uncal.fits files have the original pointing information, which is therefore slightly incorrect.  The point of the testing is to see if tweakreg can correct for these small changes.

Within each directory there are sub-directories with the _uncal files that make up a test, so 4 files per sub-directory:

run1  uses the original unperturbed _uncal files
run2  uses the dither 2 _uncal file with a small offset
run3  uses the dither 2 _uncal file with a small rotation
run4  uses the dither 2 _uncal file with both the shift and rotation

The way the testing is done is to run the pipeline in each sub-directory, and then look at the combined _i2d files that result.  So each directory has the results of running the pipeline on the particular set of 4 input _uncal files.

Assuming that the _uncal files produced by Mirage remain OK for pipeline testing one only needs the set of seven _uncal files that have had the headers fixed in order to do the testing.  The individual directories listed have all the files needed for running Mirage and for fixing the headers. 

Note that the files in these directories are probably not consistent with the current version of Mirage because there have been changes in the requirements for the input .yaml files and star source list files between when I last generated the simulations and the current situation with Mirage.  The changes needed are not too difficult, but I have not had the time or need to do this.  The original testing was done in something like version 7.5 of the pipeline.  I believe that I have had to regenerate the models once or twice since the original runs, but not recently.

For a number of the simulations where I override the default pipeline reference files to substitute simpler versions one needs to run Mirage and the pipeline with these "fake" reference files explicitly used.  These "fake" reference files are in another directory.  As noted above, the optimistic case would be where this is not needed.  And of course using the current reference files and CV3 darks does produce the most realistic simulations we can get for testing, possibly with some need to enhance the cosmic ray rate to match what is going to be seen on-orbit.

If we wish to adapt these files for a unit test I would like to regenerate the simulations, and it is not clear whether the cases with "fake" reference files can be used in such a test.


stscijgbot-jp commented 1 year ago

Comment by Kevin Volk on JIRA:

In reply to Mihai, I too was surprised that the shift/rotation/scaling option does not work well on the dithered test observations.  I had expected that it would produce a scaling of 1 and small or no rotation, so the dominant effect would be a shift.  That is not what happened in the test case, however: I think there was both a scaling and a rotation found in the test cases, leading to the bad result in the combined i2d image.  One might postulate that the simpler test with only 10 stars has too few constraints to solve for the parameters, and indeed the more complex example with many more stars worked better than the simple case--but it still was not really a good result.

I have not tested the case of a mosaic, where one would expect a small rotation each time a guide star is acquired because of the limitations of the star trackers.  The case of just dithers is going to be a fairly important science case for the WFSS mode, which is why that is the initial focus of my testing.  People may wish to dither and do a mosaic to fill in the spectral area uniformly for the different WFSS blocking filters, but I think initially in the GO programs they are just doing the dithers for the PSF reconstruction.

stscijgbot-jp commented 1 year ago

Comment by Mihai Cara on JIRA:

I re-ran Misty Cracraft's notebook, and in the last run of tweakreg (tweakreg ON with one mis-aligned image) I added the following settings:

```python
pipe3.tweakreg.tolerance = 0.3
pipe3.tweakreg.searchrad = 2
pipe3.tweakreg.use2dhist = True
```

Now I get excellent alignment - see attached images - and tweakreg reports shifts of 0.72 arcseconds, in agreement with the simulated shift of 0.0002 degrees.

In general, I recommend leaving use2dhist ON and using searchrad ~ 2*max_misalignment (it may be larger if enough real sources are detected). Tolerance should be set to the smallest possible value to minimize incorrect matching (like what Anton Koekemoer was describing in the meeting) but large enough to account for alignment errors (including rotations, skews, etc.).
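
Applied to a pipeline run, those recommendations might look like the following (the attribute names echo the settings above; the expected-misalignment value is an assumption for illustration):

```python
from jwst.pipeline import Image3Pipeline

pipe3 = Image3Pipeline()
pipe3.tweakreg.use2dhist = True           # keep 2D-histogram pre-matching ON
expected_misalignment = 1.0               # arcsec; worst-case guess for the data
pipe3.tweakreg.searchrad = 2 * expected_misalignment
pipe3.tweakreg.tolerance = 0.3            # arcsec; small, to minimize false matches
result = pipe3.run('my_image3_asn.json')  # hypothetical association file
```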

stscijgbot-jp commented 1 year ago

Comment by Misty Cracraft on JIRA:

I also got similar results when setting the shift at ~0.5 arcseconds and using a tolerance and search radius of 1.5. So yes, altering the search parameters helped to improve the results.

stscijgbot-jp commented 1 year ago

Comment by Anton Koekemoer on JIRA:

thanks very much Misty Cracraft  and Mihai Cara , I'm glad that indeed the MIRI sources could now be matched successfully when setting the search radius to 2" (which is approaching the largest search radius values generally needed, even for HST data).

Could you say please which algorithm you used, ie shift, rshift, rscale, etc?

Also I'm not sure that I'm finding the attached images mentioned by Mihai Cara , would you mind please checking that again?

stscijgbot-jp commented 1 year ago

Comment by Mihai Cara on JIRA:

Anton Koekemoer - I just re-did exactly the same notebook as Misty Cracraft. Looking at the log, it seems that fitgeom was set to 'rshift'. I removed the images from the comment itself (they were taking a lot of space) but I see that they are still listed at the top of the ticket as attachments.

stscijgbot-jp commented 1 year ago

Comment by Anton Koekemoer on JIRA:

thanks Mihai Cara  I've now found the images at the top, and thanks also for verifying that you used rshift for this.

So it seems that this particular MIRI simulation is now sorted out, in terms of demonstrating that a reliable solution can be found, and thanks very much again.

It would be good to know if the MIRI team can explore other parts of parameter space too (eg, different numbers of stars, some dither patterns with different amounts of overlap, and/or different filters) - could Misty Cracraft  let me know please what else the MIRI team may have planned for this, or if you'd like to discuss it further in more detail?

Also, if Mihai Cara could work with Kevin Volk  to investigate his NIRISS results then that would be great too, since it seemed that those results were perhaps probing different issues than with MIRI.


stscijgbot-jp commented 1 year ago

Comment by Mihai Cara on JIRA:

Anton Koekemoer I added the notebook to this ticket. The only difference is that I put all the files from Misty's test in an 'orig' subdirectory relative to the notebook (instead of downloading them in the notebook).

stscijgbot-jp commented 1 year ago

Comment by Mihai Cara on JIRA:

I am looking into Kevin Volk's issues and reading the comments.

stscijgbot-jp commented 1 year ago

Comment by Misty Cracraft on JIRA:

Anton Koekemoer Until about two months ago, we had no testing notebooks for tweakreg at all. Most of our testing at this point (for the full pipeline) has been to show that the algorithms work and can give good results for a test case or two. None of the tests written up to this point have tried to cover parameter space with different modes, different science scenes, etc. If we intend to try to cover those, we'll have to get more buy-in from the teams, testers, and leads. New tests require different simulations and different notebooks, or extremely long test notebooks. Are you saying that tweakreg needs special handling? Yes, it would be fantastic to cover many different cases for all of the tests that we have in place, but I see that as more of a long-term goal.

stscijgbot-jp commented 1 year ago

Comment by Anton Koekemoer on JIRA:

Misty Cracraft thanks for your comment. There are a couple of different issues:

So I was mainly asking about the first of the above, ie just whether there are any other simulations by the MIRI team, apart from the single case of the 4-pt dither with F1130W and 50 stars, to get a sense of whether the same default tweakreg parameters work for other types of data, or whether they may need different values.

stscijgbot-jp commented 1 year ago

Comment by Mihai Cara on JIRA:

Kevin Volk Would it be possible to put all the data described in your comment above in a directory accessible to me? Also, do you have a notebook or a script that shows how you do the processing?

stscijgbot-jp commented 1 year ago

Comment by Mihai Cara on JIRA:

I ran source finding on the "publicly" available data (some sort of "run1"?) - on one of the images - and I got pretty good results. However, tweaking these parameters may not be possible in the pipeline. That is, there should be some extensive testing on different simulated data sets (filters, various intensities, SNR, etc.) to see if a set of parameters good for most data can be identified. The issue here is that NIRISS data seem to be quite undersampled for most filters. The segmentation algorithm mentioned at the meeting computes the centers of stars using center-of-mass. For a well-sampled PSF I would recommend DAOFIND instead, due to its higher accuracy, but the segmentation method from photutils may be more reliable for NIRISS.

I think testing needs to be done to see if it would be possible to identify a good set of parameters for DAOFIND algorithm. Otherwise, center-of-mass may be more robust for the undersampled NIRISS data. I'll add some images later that show what I have achieved with DAOFIND for F158M data.

stscijgbot-jp commented 1 year ago

Comment by Kevin Volk on JIRA:

Yes indeed it is clear that the issue is under-sampling in the shorter wavelength NIRISS filters.  Various NIRISS people have tried to find a good set of default parameters for DAOFind but we have not been able to find something that works well on all our test simulations.

I can post some of the _uncal files for Mihai to work with, but I am busy right at this moment.  As for how to run the test, I just run the pipeline with the default parameters.  I have a script for this but it does not do anything special.  The point is to test the default pipeline, not to use special parameters.  Since there are many parameters, and I do not understand tweakreg or drizzle (I have tried...but failed), I have not tried to run a grid with a wide range of parameters.


stscijgbot-jp commented 1 year ago

Comment by Mihai Cara on JIRA:

I am not suggesting exploring the full set of catalog-creation, image-alignment, and drizzling parameters. If sources are not detected correctly, we should focus only on that. For this particular problem, I think the threshold and the kernel's FWHM are the main parameters to be tweaked to get good source detection. I know I could find a good set of parameters for one of your images. [I do not need the rest of the data, since the modified WCS (the introduced mis-alignment) is irrelevant for this purpose.] The main question is: can we find a set of parameters that works for all NIRISS images for one single filter? Then this optimal set for each filter could be saved in some reference file.

One would need to run multiple simulations with varying SNR, varying star/PSF intensities, and varying noise and background levels to see what the optimal parameters are. In your simulation you know exactly where you placed the sources. Then you can use the existing DAOFIND algorithm to test how well it recovers the positions of the sources as a function of detection threshold and FWHM. You do not need to run the entire step for this. Here is minimal code that does this:

 

```python
from jwst.tweakreg.tweakreg_catalog import make_tweakreg_catalog
from jwst.datamodels import ImageModel

files = [
    'jw01085002001_01101_00001_nis_cal.fits',
    'jw01085002001_01101_00004_nis_cal.fits',
    'jw01085002001_01101_00002_nis_cal.fits',
    'jw01085002001_01101_00003_nis_cal.fits',
]

im = ImageModel(files[0])  # for now, work with the first file

# In the pipeline snr_threshold is not dependent on kernel_fwhm.
# But maybe it should be?
kernel_fwhm = 1.5                      # Gaussian kernel FWHM in pixels
snr_threshold = 400 / kernel_fwhm**2   # SNR threshold above the background
brightest = 100                        # keep only the ``brightest`` objects
peakmax = None                         # filter out objects with pixel values >= ``peakmax``

catalog = make_tweakreg_catalog(
    im, kernel_fwhm, snr_threshold,
    brightest=brightest, peakmax=peakmax
)
print(repr(catalog))

# save an XY catalog for DS9 (1-indexed pixel coordinates)
with open('catalog.xy', 'w') as f:
    for x, y in zip(catalog['xcentroid'], catalog['ycentroid']):
        f.write(f'{x + 1:.12f} {y + 1:.12f}\n')
```

This code shows how to get fitted coordinates of the detected sources. One just needs to add some code to automatically compare fitted values to the simulated values and then run this for various simulations and try to find optimal parameters.
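One way to do that comparison, as a sketch (assuming the true positions from the Mirage input catalog are available as x_true, y_true arrays in the same pixel convention as the fitted centroids):

```python
import numpy as np
from scipy.spatial import cKDTree

def centroid_residuals(x_fit, y_fit, x_true, y_true, max_sep=2.0):
    """Match fitted to true positions; return radial residuals in pixels."""
    tree = cKDTree(np.column_stack([x_true, y_true]))
    dist, _ = tree.query(np.column_stack([x_fit, y_fit]),
                         distance_upper_bound=max_sep)
    return dist[np.isfinite(dist)]   # unmatched sources come back as inf
```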

I am not saying this is easy and quick. I am also not qualified to evaluate whether the additional effort is worth the (possibly) increased accuracy over the less accurate (but more robust) center-of-mass estimate.


stscijgbot-jp commented 1 year ago

Comment by Mihai Cara on JIRA:

I just uploaded several images: one shows the entire image with regions marking the fitted centroids of the detected sources, and the others show zoomed-in views of 4 sources (the top 3 and one at the bottom), representing 40% of all sources. The rest look just as good. This result was obtained with the parameters in the script above:

```python
kernel_fwhm = 1.5                      # Gaussian kernel FWHM in pixels
snr_threshold = 400 / kernel_fwhm**2
```

The issue is to figure out optimal parameters (kernel_fwhm, snr_threshold) as a function of instrument, filter, SNR, ...

stscijgbot-jp commented 1 year ago

Comment by Mihai Cara on JIRA:

I performed another comparison of centroids found with tweakreg (from drizzlepac!), photutils' IRAFStarFinder, and DAOStarFinder. tweakreg (red regions) and IRAFStarFinder (blue regions) are almost indistinguishable and well centered on the star/PSF, while DAOStarFinder (used in jwst.tweakreg; green regions) is way off.

See 'comparison.png'. On the left - original data; on the right - "density enhanced" image (data convolved with kernel).

My understanding is that DAOFIND operates on the density-enhanced image. If that is correct, I can't see why it would fail to detect the real peak in the image. It seems to me this could very likely be a bug in the implementation of the algorithm rather than a fundamental issue with the algorithm itself. I will file a GitHub issue describing this and will report back the conclusion.

Edit: GitHub issue: astropy/photutils#1243

[Image: comparison.png]

stscijgbot-jp commented 1 year ago

Comment by James Davies [X] on JIRA:

Nice detective work there Mihai. If it's just a bug in the DAOStarFinder implementation in photutils, then we could easily switch to IRAFStarFinder (we're already using circular gaussians) and that would fix it short term, whereas a longer-term fix would require a release of photutils.

stscijgbot-jp commented 1 year ago

Comment by Matteo Correnti on JIRA:

Mihai Cara and Kevin Volk do you observe the same discrepancies in centroid accuracy between DAOStarFinder and IRAFStarFinder also for less undersampled NIRISS filters? If you change the FWHM value for DAOStarFinder are you able to improve the centroid accuracy? 

I'm asking because I don't recall such an offset during my tests with NIRCam images (nor does Mattia Libralato for MIRI), so I was wondering if it's a DAOStarFinder issue/bug or just an inability to provide a good result for severely undersampled PSFs. I will perform some further tests for NIRCam and report back.

stscijgbot-jp commented 1 year ago

Comment by Kevin Volk on JIRA:

I have noted that the issues with the source detection are not as bad in the longer wavelength NIRISS filters where the sampling is better.  Most of my tests have concentrated on the short wavelength filters.  We expect more science use for the shorter wavelength filters since these are used for the WFSS mode, hence that is the first concern in the testing.  But I have not seen the type of large offset that Mihai showed above, as far as I remember.  In terms of the FWHM value, I have tried a couple of values above 2.5 pixels but these do not seem to make a significant difference in the results.  The main differences we have noted are in the detection of noise as point sources rather than in systematic issues in the positions of the real stars.

stscijgbot-jp commented 1 year ago

Comment by Matteo Correnti on JIRA:

Kevin Volk that sounds more familiar. Also for NIRCam, I have seen "issues" with the source detection that require careful fine-tuning of the DAOStarFinder (or IRAFStarFinder) parameters and/or some "a posteriori" quality cut using the sharpness and roundness, but not w.r.t. the centroid accuracy of the "real" stars. That's why I was surprised by Mihai Cara's findings.

stscijgbot-jp commented 1 year ago

Comment by Misty Cracraft on JIRA:

Anton Koekemoer We already alter the kernel_fwhm parameter in our MIRI parameter reference files, while leaving other parameters at the default values. We feel this parameter is the most important one to change by filter. We may do a little more testing on some simulated data that we already have available at a few different wavelengths to see if we think those parameters work, but mostly we think that we'll find out with real data where we have problems and can put more effort into fine-tuning at that point.

stscijgbot-jp commented 1 year ago

Comment by Anton Koekemoer on JIRA:

Misty Cracraft thanks for the additional details, and for describing that you already adjust kernel_fwhm for different MIRI filters. In terms of on-orbit data, we'd want to not only optimize parameters for improving alignment, but also avoid potentially making the alignment worse (compared to not running tweakreg). If you have MIRI simulations in a few different filters, perhaps you could point Mihai Cara to those as well, to run different source detection/ centroiding algorithms (tweakreg from drizzlepac, photutils IRAFStarFinder and DAOStarFinder) in case the MIRI data behave differently than NIRISS/ NIRCam.

stscijgbot-jp commented 1 year ago

Comment by Misty Cracraft on JIRA:

Of course. We have some simulations that were created for public use. It's four images (one image at each of four dither positions), for three filters: F560W, F770W and F2100W. They're in a Box folder divided out in stages (uncal in stage 0, rate in stage 1, cal in stage 2, combined in stage3). You can access any of these files here: https://stsci.box.com/s/2b6evrie2swaybarxhdmpqizktl3g0wm. Let me know if there are problems getting to this data.

stscijgbot-jp commented 1 year ago

Comment by Anton Koekemoer on JIRA:

thanks very much. If Mihai Cara can let us know whether he can access that link then that would be good, so that he can run the different source detection/ centroiding algorithms (tweakreg from drizzlepac, photutils IRAFStarFinder and DAOStarFinder, or others) to see how well these work for the MIRI data.

stscijgbot-jp commented 1 year ago

Comment by Mihai Cara on JIRA:

I am working with Larry Bradley to identify the issue. It may have been a user error on my part; still investigating. I will report back when we have more information. (Larry reports that with different settings he can find the peaks of real sources plus many more in the PSF wings, which is what Kevin reported earlier. However, in a previous post I showed that there exist FWHM settings that result in "clean" detection of real sources only.)

With regard to the accuracy of the methods: when fitting a Gaussian-like function to an undersampled PSF, it is almost guaranteed not to get accurate results. However, my "feeling" is that center-of-mass would likely be the least accurate. For the methods that fit Gaussians to the PSF I would expect the error not to exceed 0.5 pixels (which is still too large). The easiest thing to do right now is to switch to IRAFStarFinder, at least temporarily until we sort things out (or we can add an option/parameter to select the source finding method and set IRAF's as the default).

stscijgbot-jp commented 1 year ago

Comment by Anton Koekemoer on JIRA:

thanks Mihai Cara  and just to respond about the Gaussian issue, for the case of MIRI, the data are generally not as undersampled as the other instruments – the MIRI PSF FWHM ranges up to 7.3 pixels for the reddest filter. But indeed feel free to try out whichever source detection/ centroiding algorithms you would like to run on the MIRI data, as you have for the other datasets.

stscijgbot-jp commented 1 year ago

Comment by Anton Koekemoer on JIRA:

And a question actually (for either Mihai Cara  or Larry Bradley) about the photutils center-of-mass algorithm: I assume this is flux-weighted, ie calculates the moments of the pixel flux distribution along the x and y axes (so far I'm looking at https://photutils.readthedocs.io/en/stable/centroids.html but I'm not yet finding a page presenting the mathematical formalism for the algorithm used).
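
For what it's worth, the standard flux-weighted (first-moment) centroid, which is what a center-of-mass centroid normally means, is

```latex
x_c = \frac{\sum_i x_i f_i}{\sum_i f_i}, \qquad
y_c = \frac{\sum_i y_i f_i}{\sum_i f_i},
```

where f_i is the (background-subtracted) flux in pixel i of the cutout; whether photutils applies any windowing or clipping on top of this is part of the question above.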

Also, I wonder whether you've considered (or perhaps implemented) the windowed centroids used in SExtractor, eg XWIN, YWIN and related quantities (https://sextractor.readthedocs.io/en/latest/PositionWin.html), or have you already investigated how those perform relative to the center-of-mass algorithm?

stscijgbot-jp commented 1 year ago

Comment by Misty Cracraft on JIRA:

James Davies [X] I'm doing some tests now on MIRISim data to figure out if our current set of parameters gives decent results, and I don't need to use S/N near 200 anymore. I did in earlier tests, but am now down in the range of 10-40 depending on the data. Maybe we need to update some of the data in the regression tests? I think a number of fixes to the pipeline (and to things like the flat reference files) have helped the S/N values in my tests, but I'm surprised you're still needing such high values in the regression tests.

stscijgbot-jp commented 1 year ago

Comment by Mihai Cara on JIRA:

I have attached a notebook (daofind-issue.ipynb) that successfully finds enough good sources, using both the DAOFIND and IRAF starfind algorithms, in almost all of the images provided by Kevin Volk and Misty Cracraft (I have excluded dithered versions of the same image). There is a difference in the interpretation of the threshold parameter between the DAOFIND and IRAF methods. Fundamentally, it appears that the FWHM should be tweaked for each instrument/filter combination. And maybe the threshold should be tweaked for each instrument, filter, and perhaps exposure time (i.e., background level, SNR, ...)?

stscijgbot-jp commented 1 year ago

Comment by Paul Goudfrooij on JIRA:

Folks, the report referred to below might well be interesting to those involved in addressing this ticket. Centering method centroid_2dg generally works best (especially for undersampled PSFs), but it is quite slow. IRAFStarFinder is second best, only running into trouble for undersampled PSFs whose true center is close to pixel edges. Then again, one can eliminate stars with centroid coordinates close to half pixels in X and Y prior to doing the alignment. My recommendation is to use IRAFStarFinder (rather than DAOStarFinder) for the tweakreg step.

"Accuracy and Precision of Centroid Algorithms in the photutils Python package for NIRISS Point Spread Functions" https://www.stsci.edu/~goudfroo/NIRISSdoc/Centroid_Accuracies_Precisions_NIRISS_v2.pdf

stscijgbot-jp commented 9 months ago

Comment by Paul Goudfrooij on JIRA:

To further clarify my previous comment: for most NIRISS filters (many of which feature severely undersampled PSFs), DAOStarFinder does not work adequately no matter how one tweaks the parameter values. As mentioned in the Technical Report in my previous comment, this is due to its use of 1-D Gaussian fits to projected light distributions during the convolution step, which really does not work well with undersampled PSFs. IRAFStarFinder works way, way better for centroiding stars in NIRISS imaging with those filters.

As such, we strongly recommend using IRAFStarFinder in tweakreg, at least for NIRISS. During our testing, we got very good results with the following parameter values:
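
As a purely illustrative sketch of invoking IRAFStarFinder through photutils (the numbers below are placeholders, not the recommended values referred to above):

```python
from astropy.stats import sigma_clipped_stats
from photutils.detection import IRAFStarFinder

# `data` is assumed to be a background-flat NIRISS cal image array
_, median, std = sigma_clipped_stats(data, sigma=3.0)
finder = IRAFStarFinder(threshold=5.0 * std,   # detection threshold above background
                        fwhm=1.5,              # kernel FWHM in pixels
                        minsep_fwhm=2.0)       # minimum star separation, in FWHM units
sources = finder(data - median)                # astropy Table, or None if none found
```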

stscijgbot-jp commented 9 months ago

Comment by Anton Koekemoer on JIRA:

thanks Paul Goudfrooij, this is very helpful, especially the parameter values, which will help with moving this ticket forward into implementation. We can certainly explore your proposed approach of implementing IRAFStarFinder as an option in tweakreg, and select it by default at least for NIRISS, using the parameters you mention as a starting point (and if it turns out that NIRCam can benefit from this then we may end up selecting it as the default algorithm there too). So this ticket will be among the others discussed at the upcoming quarterly prioritization meeting, and we'll be sure to indicate the high urgency/ impact values that it currently has for both NIRISS and NIRCam.

stscijgbot-jp commented 8 months ago

Comment by Howard Bushouse on JIRA:

Anton Koekemoer Can you remind me if this topic has had a Cal WG discussion? If we're going to work on this in the next couple months for B10.2, we'll need some consensus on how to proceed. The discussion in the ticket(s) seems to mention the addition of the IRAFStarFinder routine (as an additional option to the existing DAOStarFinder) as the most frequent solution. Is that what's desired at this point or do other options need to be explored yet too?

stscijgbot-jp commented 7 months ago

Comment by Ned Molter on JIRA:

I'm now working on this ticket.  So far, I've made a local version of the pipeline that permits the option of DAO or IRAF star finders, and now I'm writing regression tests for those cases.  Kevin Volk If you have it available, would you please point me to some NIRISS flight data that give bad results from DAOStarFinder?

I've also confirmed using NIRCam data what Mihai Cara found out in this photutils GitHub issue (https://github.com/astropy/photutils/issues/1243), which is that DAOStarFinder is sensitive to the sharphi parameter but IRAFStarFinder is not.  DAOStarFinder needs sharphi>=3 for this test case; see attached images.  Kevin Volk just to be sure, have you tried using DAOStarFinder with a very large sharphi parameter?

[Images: starfind_comparison_nircam_sharphi3.png, starfind_comparison_nircam_sharphi1.png]
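
A small sketch of the comparison described above (the data and threshold setup are assumptions; the sharphi values are the ones mentioned):

```python
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder, IRAFStarFinder

# `data` is assumed to be a NIRCam cal image array
_, median, std = sigma_clipped_stats(data, sigma=3.0)

# DAOStarFinder misses these PSF cores unless sharphi is raised well above
# its default of 1.0 (sharphi >= 3 worked for this test case)
dao = DAOStarFinder(threshold=5.0 * std, fwhm=2.0, sharphi=3.0)
irf = IRAFStarFinder(threshold=5.0 * std, fwhm=2.0)   # insensitive to sharphi
dao_sources = dao(data - median)
irf_sources = irf(data - median)
```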


stscijgbot-jp commented 7 months ago

Comment by Anton Koekemoer on JIRA:

Thanks Ned Molter  for getting started on this ticket.

To answer the question from Howard Bushouse (which came while I was away on travel last month, sorry about my delayed reply) - yes, this ticket has been discussed at previous Cal WG meetings (incl 2021-09-07, 2021-10-05), with the conclusion of continuing discussion in this ticket for incorporating improvements. The result of those meetings (and the above discussion) is the following:

stscijgbot-jp commented 7 months ago

Comment by Paul Goudfrooij on JIRA:

Hi Ned Molter - The technical report I attached to my comment on 10/14/22 has (I believe) the most extensive testing of the various centroid algorithms for NIRISS data. In that report, I used simulated data rather than flight data, because the simulated data places the PSFs in known locations so that the measured centroids directly translate to centroid errors. I used sharphi = 3.0 for DAOStarFinder in that study.

stscijgbot-jp commented 7 months ago

Comment by Ned Molter on JIRA:

Anton Koekemoer Thanks for the nudge to also include Photutils Source Detect - should be easy enough.

Thanks for the clarification Paul Goudfrooij.  I did read that memo, and I see the filenames there, but in what repository can they be accessed?  (sorry if that is a basic question, I'm new here).

I'll defer to Howard Bushouse about whether it's better to have a regression test running on simulated data with a known result, or on flight data.  Based on my (limited) understanding, the primary objective of the regression tests is to ensure consistency and reasonableness of the results as the codebase changes over time, and not necessarily to evaluate the quality of the algorithms relative to one another.