spacetelescope / mirage

This code can be used to generate simulated NIRCam, NIRISS, or FGS data
https://mirage-data-simulator.readthedocs.io/en/latest/
BSD 3-Clause "New" or "Revised" License

NIRISS WFSS simulations have a wrong count rate #278

Open · KevinVolkSTScI opened this issue 5 years ago

KevinVolkSTScI commented 5 years ago

There is clearly some kind of issue in the scaling from the point-source count rates to the dispersed signal count rates. I ran a model with a single source list in which each source has the same magnitude in every NIRISS filter: 22.5, 20.0, and 17.5 for the three sources. The simulation also specifies zero background. In the output seed image, produced by making scene images in the F090W, F115W, F140M, and F150W filters and then using the GR150C/F115W combination for the output WFSS simulation, there are two issues:

1. The background level is about 1.3 ADU/second over the image, whereas I specified a 0.0 ADU/second background in both the imaging scene image .yaml files and the WFSS .yaml file.
2. The output signal from the brightest of the three sources is a factor of 13.09 brighter than its point-source image count rate, and the same factor appears to apply to all three sources.

The pointsources.list files produced by the imaging simulation in the F115W filter and by the grism simulation in the same filter show the same "target" count rates. However, a direct measurement of the count rates in the WFSS seed image comes out higher than expected: the input signal for the brightest source is 6444.482 ADU/second in the point-source list file, but the signal for the combined spectral orders in the dispersed seed image is 84334 ADU/second after background subtraction. The measured signal in the seed image is probably not good to better than a few percent, but on the seed image for the direct image in the F115W filter the photometry matches the input value to about 1% without doing anything fancy. In any case, the discrepancy between the dispersed seed image and the direct-imaging seed image is quite large.
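
For reference, a quick way to reproduce this comparison from the seed images (a sketch only; the file names and the extraction box below are placeholders, and the seed data are assumed to be readable with astropy.io.fits.getdata):

```python
import numpy as np
from astropy.io import fits

# Hypothetical file names -- substitute the actual Mirage seed image outputs.
wfss_seed = fits.getdata("niriss_gr150c_f115w_wfss_seed_image.fits")

# Crude background estimate from the image median (the WFSS seed shows
# ~1.3 ADU/s even though the yaml files requested 0.0 ADU/s).
background = np.median(wfss_seed)

# Sum the dispersed signal of the brightest source over a generous box
# covering all of its spectral orders (coordinates are placeholders).
y0, y1, x0, x1 = 900, 1100, 200, 1800
dispersed_rate = np.sum(wfss_seed[y0:y1, x0:x1] - background)

# Count rate listed for the same source in pointsources.list.
input_rate = 6444.482  # ADU/s

print(dispersed_rate / input_rate)  # reported above to be ~13.09
```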

I do not know whether the problem is inside Nor's code (e.g. some unit problem in the sensitivity files) or whether something is happening on the Mirage side; it depends on where the dispersed seed image is made.

The fact that the background is non-zero when the input parameter files ask for zero background is a bit puzzling. The background shape is clearly from a NIRISS background image, since I can see the occulting spots. I tried a different background value, but the output background level is the same. Nor suggests that this is because the WFSS code is not up to date; this needs to be checked.
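
One quick sanity check (again just a sketch, with hypothetical file names) is to difference two WFSS seed images generated with different requested background values; if the parameter is being ignored, the difference should be essentially zero everywhere:

```python
import numpy as np
from astropy.io import fits

# Hypothetical file names: two WFSS seed images from yaml files that
# differ only in the requested background value.
seed_a = fits.getdata("wfss_seed_bkgd_0p0.fits")
seed_b = fits.getdata("wfss_seed_bkgd_0p5.fits")

diff = seed_a - seed_b
print(np.median(diff), np.std(diff))
# If the background parameter is ignored for WFSS, the difference is ~0
# and only the fixed background file contributes (~1.3 ADU/s here).
```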

KevinVolkSTScI commented 5 years ago

Also, as noted previously in the comments on pull request #254, the NIRISS WFSS grisms reduce the overall count rate by 20% compared to direct imaging. This factor is not currently in the code.

bhilbert4 commented 5 years ago

Just a couple quick comments without thinking too deeply about this, because I have to head home:

  1. To be sure you have the latest WFSS software, you'll need to either create a new environment using the mirage environment file (which probably needs some version updating for packages other than NIRCam_Gsim), or clone NIRCam_Gsim (if you don't have it already), or pull the master branch (if you do already have it).

That said, I don't recall making any changes related to WFSS background being set to a user input. I think the background is always pulled from the appropriate file at the moment.

  2. As for the 20% reduction in count rate for NIRISS WFSS relative to imaging, shouldn't this be built into the WFSS sensitivity files? We can put it in the code, but then we need to make sure that future updates to the NIRISS sensitivity files don't also include this factor, or it would be applied twice. I consider hard-coding things like this to be a last resort, because it's easy to forget to update them later if need be.
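
One way to check whether the factor is already folded into the delivered sensitivity files would be to inspect them directly; the sketch below assumes a FITS table format, and the file and column names are guesses rather than the real deliverable names:

```python
from astropy.io import fits

# Hypothetical file name -- adjust to the actual sensitivity file
# delivered for the GR150C/F115W combination.
with fits.open("NIRISS_GR150C_F115W_sensitivity.fits") as hdul:
    hdul.info()                 # see which extension holds the table
    table = hdul[1].data
    print(table.columns.names)  # e.g. wavelength / sensitivity columns

# Integrating the sensitivity over wavelength and comparing it to the
# imaging-mode throughput for the same filter should show whether the
# ~0.8 grism transmission factor is already included.
```
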
KevinVolkSTScI commented 5 years ago

I was running in a new conda environment with the 19 March version of Mirage, so it should be using the current version of all three packages.

Do I understand correctly that the background parameter in the .yaml file is not used when the WFSS mode is being simulated? I would think that there should be a mechanism in place to scale the expected background; the background should not be a fixed value for a given filter. That is a secondary issue, though. The background rate currently being used for NIRISS seems to be too high: the value I obtained in my test was about 1.3 ADU/second, while the expected WFSS background in the F115W filter for medium zodiacal light conditions is 0.365 ADU/second, a discrepancy of a factor of ~4.
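
A minimal sketch of the scaling mechanism described above, assuming the NIRISS WFSS background is stored as a 2D FITS image (the file name and the mean normalization are assumptions, not the current Mirage behavior):

```python
import numpy as np
from astropy.io import fits

def scaled_wfss_background(background_file, requested_rate):
    """Scale a fixed NIRISS background image to a user-requested rate.

    background_file -- hypothetical 2D FITS image of the WFSS background
    requested_rate  -- desired mean background in ADU/s from the yaml file
    """
    bkgd = fits.getdata(background_file).astype(float)
    # Normalize so the structure (e.g. the occulting spots) is preserved
    # but the mean level matches the requested value.
    return requested_rate * bkgd / np.mean(bkgd)

# Example: the expected medium-zodi WFSS background in F115W is ~0.365 ADU/s,
# whereas the current seed images show ~1.3 ADU/s, roughly a factor of 4 too high.
```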

We have to check whether the 0.8 factor is included. This factor is given in the ETC throughput files, and it should be in the sensitivity files that Gabe produced. In that case the 0.8 factor does not need to be in the code, unless the signal value in the pointsources.list file is used to normalize the spectrum in some way. However, I am still of the opinion that this factor should be applied when making the pointsources.list file for the dispersed case, because that will give the actual expected total count rate over all the orders. Leaving the imaging count rates in the source file for the dispersed case seems misleading to me, unless we somehow make it clear that the listed count rates are for direct images and not for the dispersed seed images.
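
If that factor were applied when writing the dispersed-case source list, the conversion would simply be the following (the 0.8 value is the nominal grism transmission discussed above):

```python
GRISM_THROUGHPUT = 0.8  # nominal GR150 transmission relative to imaging

def dispersed_count_rate(imaging_rate, throughput=GRISM_THROUGHPUT):
    """Total expected count rate summed over all spectral orders,
    given the imaging-mode count rate for the same source."""
    return throughput * imaging_rate

# Example from this test, for the brightest source:
print(dispersed_count_rate(6444.482))
# ~5155.6 ADU/s; compare with the 84334 ADU/s actually measured in the seed image.
```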

This also raises a subtle issue: if we use the background value from the .yaml file to scale the grism backgrounds, is the entered value meant to be before or after the 0.8 factor is applied? I have assumed that it would be after the 0.8 factor, so if I specify a 0.05 ADU/second background for a WFSS simulation, that value would be multiplied by the normalized background image and the result used in the seed image. Hence an imaging simulation with a background level of 0.05 ADU/second would correspond to a dispersed model with a background of 0.04 ADU/second.
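
To make the assumed convention concrete (pure arithmetic following the paragraph above):

```python
GRISM_THROUGHPUT = 0.8  # nominal grism transmission relative to imaging
requested = 0.05        # ADU/s entered in the WFSS yaml file

# Assumption used above: the entered value is already "after" the 0.8
# factor, so it multiplies the normalized background image directly.
dispersed_background = requested          # 0.05 ADU/s in the WFSS seed image

# An imaging simulation run with a 0.05 ADU/s background then corresponds
# to a dispersed background of:
print(0.05 * GRISM_THROUGHPUT)            # 0.04 ADU/s
```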