KevinVolkSTScI opened 4 years ago
If someone needs to look at the output files, they can be found at
/ifs/jwst/wit/niriss/kevin/focus_car_second_new_field/
The input .yaml file is jw01085005001_01101_00002a1_nis.yaml. The output dispersed seed image is jw01085005001_01101_00002a_nis_uncal_dispersed_seed_image.fits.
I am looking into the scaling of the input spectra to see what is causing the large values, of order 1e+20. I had thought that Bryan had put in code to rescale the spectra to the input magnitude; if that is not the case, then the large values may be due to the input spectra not being scaled down to the input magnitudes in the HDF5 file. I apply a scaling proportional to the input magnitude in the filter, but perhaps there was an overall scaling factor in the input spectra that was missed in making the HDF5 input file.
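For reference, scaling a spectrum to a target magnitude amounts to multiplying the fluxes by 10**(-0.4*(m_target - m_spec)), where m_spec is the synthetic magnitude of the unscaled spectrum in the filter. A minimal sketch with placeholder magnitudes and flux values:

import numpy as np

def magnitude_scale_factor(m_target, m_spec):
    # Factor that makes the spectrum's synthetic magnitude equal m_target.
    return 10.0 ** (-0.4 * (m_target - m_spec))

flux = np.array([1.2e-17, 1.5e-17, 1.4e-17])  # placeholder flux values
scaled_flux = flux * magnitude_scale_factor(m_target=21.0, m_spec=20.0)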
Assuming the scaling problem is my fault in making the spectral catalogue file, the negative values are the most immediate issue in the dispersed seed image.
Hmmm. The direct seed image looks ok. And the segmentation map looks ok. It might be worth updating NIRCam_Gsim and re-running. There have been a couple updates to that package since July 6. @NorPirzkal does this make any sense to you?
As for AWS, the different number of sources is most likely due to #507. That lowered the default pixel threshold for a source to be included in the segmentation map, and it also made that threshold a user-settable parameter. This was because in some other NIRISS simulations of faint galaxies, sources were not making it into the segmentation map at all because they were too faint.
I looked at one of your hdf5 files, and all the spectra have units of W/(m^2 micron). Mirage should then convert these spectra into CGS units, which is what the disperser expects. Is that what you see in the hdf5 file that is output, and do the magnitudes of the spectra make sense?
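For context, going from W/(m^2 micron) to the CGS f_lambda units the disperser works in (erg/s/cm^2/Angstrom) is just a constant factor of 0.1, so that conversion alone cannot produce values of order 1e+20. A small check with astropy units and placeholder flux values:

import numpy as np
import astropy.units as u

# Placeholder flux densities in the units found in the hdf5 file.
flux_mks = np.array([1.0e-17, 2.0e-17]) * u.W / (u.m**2 * u.micron)

# Convert to the CGS f_lambda units the disperser expects.
flux_cgs = flux_mks.to(u.erg / (u.s * u.cm**2 * u.AA))
print(flux_cgs)  # values are 0.1 times the inputs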
Hi Bryan,
I can look at this. I have just assumed that the code rescales the spectra to the catalogue magnitude, so it should not matter if there is a common factor missing from the conversion. If this is not correct, that could be the issue. I have made an .hdf5 file with the spectra scaled to the target magnitudes, to try out and see whether that fixes the scaling issue.
Kevin
If the flux units in the spectrum are set to "normalized" as seen here, then the input spectrum is rescaled to the catalog magnitude. Rescaling is done here: https://github.com/spacetelescope/mirage/blob/master/mirage/catalogs/spectra_from_catalog.py#L578
If the units are given in the file as FLAMBDA or FNU (mks or cgs) as defined here, then no conversion is done and the spectrum is used as-is.
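For what it's worth, here is a rough sketch of writing a spectrum into the hdf5 catalogue with the flux units flagged as "normalized" so that Mirage does the rescaling to the catalog magnitude. The dataset naming and attribute layout shown here are my assumptions, not confirmed against the code, so check them against the Mirage documentation:

import h5py
import numpy as np

# Assumed layout: one dataset per source, named by the source index,
# holding a 2 x N array of wavelengths and fluxes.
wavelengths = np.linspace(0.8, 2.3, 500)   # microns, placeholder grid
fluxes = np.ones_like(wavelengths)         # shape only; Mirage rescales it

with h5py.File('example_spectra.hdf5', 'w') as fh:
    dset = fh.create_dataset('1', data=np.vstack([wavelengths, fluxes]))
    dset.attrs['wavelength_units'] = 'microns'
    dset.attrs['flux_units'] = 'normalized'  # triggers rescaling to the catalog magnitude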
OK, so this explains the large values, but it does not address the negative values in the seed images. I will try reinstalling the dispersion code and running the model again with the spectra scaled properly.
I was attempting to run a Mirage simulation of a NIRISS WFSS dispersed image with a spectral HDF5 catalogue. The run was somewhat slow because about 1090 sources were being dispersed, but the code eventually succeeded in making the dispersed seed image. However, the run then crashed in the ramp-generation step with the message:
initial basename: jw01085005001_01101_00002a_nis_uncal.fits
/ifs/jwst/wit/niriss/kevin/focus_car_second_new_field/jw01085005001_01101_00002a_nis_uncal_linear_dark_prep_object.fits
seg_location: -1
Integration 0:
Traceback (most recent call last):
  File "./runmodels.py", line 64, in <module>
    t1.create()
  File "/home/kvolk/anaconda3/envs/mirage_6july2020/lib/python3.6/site-packages/mirage/wfss_simulator.py", line 368, in create
    obs.create()
  File "/home/kvolk/anaconda3/envs/mirage_6july2020/lib/python3.6/site-packages/mirage/ramp_generator/obs_generator.py", line 1237, in create
    simexp, simzero = self.add_crs_and_noise(self.seed_image, num_integrations=num_integrations)
  File "/home/kvolk/anaconda3/envs/mirage_6july2020/lib/python3.6/site-packages/mirage/ramp_generator/obs_generator.py", line 202, in add_crs_and_noise
    ramp, rampzero = self.frame_to_ramp(inseed)
  File "/home/kvolk/anaconda3/envs/mirage_6july2020/lib/python3.6/site-packages/mirage/ramp_generator/obs_generator.py", line 1894, in frame_to_ramp
    poissonsignal = self.do_poisson(deltaframe, self.params['simSignals']['poissonseed'])
  File "/home/kvolk/anaconda3/envs/mirage_6july2020/lib/python3.6/site-packages/mirage/ramp_generator/obs_generator.py", line 1735, in do_poisson
    newimage = np.random.poisson(signalgain, signalgain.shape).astype(np.float64)
  File "mtrand.pyx", line 3567, in numpy.random.mtrand.RandomState.poisson
  File "_common.pyx", line 824, in numpy.random._common.disc
  File "_common.pyx", line 621, in numpy.random._common.discrete_broadcast_d
  File "_common.pyx", line 355, in numpy.random._common.check_array_constraint
ValueError: lam value too large
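For what it's worth, numpy's Poisson sampler raises this error when lam exceeds an internal limit of roughly 9.2e18 (near the int64 maximum), and it raises a separate "lam < 0" error for negative values, so seed-image pixels at the 1e+19 to 1e+23 level, and the negative pixels, would both make do_poisson fail. A minimal reproduction with placeholder values:

import numpy as np

lam = np.array([1.0e6, 4.7e19])  # second value exceeds numpy's internal Poisson limit
try:
    np.random.poisson(lam, lam.shape)
except ValueError as err:
    print(err)  # "lam value too large"

# Negative pixel values fail too, with a different message:
try:
    np.random.poisson(np.array([-1.0]), (1,))
except ValueError as err:
    print(err)  # "lam < 0"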
An examination of the dispersed seed image showed two issues. First, the general magnitude of the values is far too large: the IRAF imstat command gives a mean value of 4.7e+19, which is absurdly large, and the range extends several orders of magnitude further still in both the positive and negative directions.
kevin> imstat jw01085005001_01101_00002a_nis_uncal_dispersed_seed_image[1]
#                                                          IMAGE     NPIX      MEAN    STDDEV        MIN        MAX
  jw01085005001_01101_00002a_nis_uncal_dispersed_seed_image[1]  4194304  4.699E19  7.424E20  -7.163E21  1.527E23
kevin> display jw01085005001_01101_00002a_nis_uncal_dispersed_seed_image[1] 1 zs+
z1=-2.481982E19 z2=2.850157E19
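The same numbers can be checked quickly in Python without IRAF; a short sketch, with the extension index assumed to be 1 as in the imstat call above:

import numpy as np
from astropy.io import fits

fname = 'jw01085005001_01101_00002a_nis_uncal_dispersed_seed_image.fits'
with fits.open(fname) as hdul:
    data = hdul[1].data.astype(np.float64)

print('npix   :', data.size)
print('mean   :', data.mean())
print('stddev :', data.std())
print('min/max:', data.min(), data.max())
print('negative pixels:', np.sum(data < 0))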
The second issue is that all of the zero-order images have a negative ghost associated with them. This should not happen, and it did not occur in some previous runs with older versions of Mirage and the dispersion code.
I seem to recall that large values in the dispersed image were a problem at some point before and were fixed, but I am not 100% sure about that.
The version of Mirage used here is that of 6 July 2020. I think I updated Nor's code since 6 July, but I cannot be sure of this. I have not been able to determine a version for the dispersing code from within Python, as I am not sure how to do that. I did see that the orders are now numbered 1, 0, 2, 3, and -1 rather than having the letters A, B, C, D, and E as was the case before.
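One way to check the installed versions from within Python is via pkg_resources; the distribution names below are guesses (the disperser may be registered under a different name), so this is only a sketch:

import pkg_resources

for name in ('mirage', 'NIRCAM_Gsim'):
    try:
        print(name, pkg_resources.get_distribution(name).version)
    except pkg_resources.DistributionNotFound:
        print(name, 'not found under this distribution name')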
As a side note, I tried to run the same model on the AWS virtual machine with exactly the same input files, but on that machine the number of dispersed objects in the field was 844, whereas here the number was 1087. I do not understand that discrepancy. The AWS version is older, as it has letter designations for the orders.