Open sosey opened 6 years ago
@jdavies-st fix anything I missed above or add new information that's missing? @npirzkal - comments, enlightenments? @gbrammer - comments, enlightenments?
I wouldn't rectify anything but rather just make 2D cutouts from the science frames.
Rectifying is an old way of saying that the x,y positions in a dispersed image/stamp are remapped to wavelength vs. cross-dispersion direction. If, as is currently done in Stage 2, you simply compute a transformation that gives the wavelength of any native pixel, then no resampling of the original pixels is needed. At some point in Stage 3, when you want to combine multiple single spectra, you will have to decide how to combine/resample things. For Stage 2, a non-resampled mapping makes sense, and the product would look like a regular 2D cutout but with some wavelength calibration information. I see that Stage 3 plans on combining R and C grism observations. What would be most useful is to make sure that R and C (dithered) observations are also combined separately. Combining R and C grisms can be done, but one needs to be aware that it will lead to bad combinations for non-point sources, or sources with offset emission lines, etc. Is there a plan to make sure that Stage 3 does not simply combine everything (R and C)? Combined R/C spectra could be generated if people really want to see the two modes combined, but I would really like to see R and C spectra also processed separately.
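The non-resampled mapping described above can be sketched in a few lines: evaluate a wavelength transform at each native pixel of the cutout and carry the result alongside the data, leaving the pixel values untouched. This is only an illustrative toy; the linear dispersion coefficients and the `wavelength_map` helper below are made up for the example, and in the pipeline this role is played by the GWCS transform attached to each cutout.

```python
import numpy as np

def wavelength_map(cutout_shape, x0, disp_coeffs):
    """Assign a wavelength to every native pixel of a 2D dispersed cutout.

    cutout_shape : (ny, nx) shape of the 2D cutout
    x0           : x pixel of the trace reference point
    disp_coeffs  : polynomial coefficients, wavelength = sum c_i * (x - x0)**i

    No pixel values are touched -- this is calibration metadata, not resampling.
    """
    ny, nx = cutout_shape
    dx = np.arange(nx) - x0                     # offset along the dispersion axis
    wave_row = np.polynomial.polynomial.polyval(dx, disp_coeffs)
    return np.broadcast_to(wave_row, (ny, nx))  # same wavelengths for every row

# Toy example: 10x50 cutout, reference at x=5, 1.0 um + 0.01 um/pixel dispersion
wmap = wavelength_map((10, 50), 5.0, [1.0, 0.01])
```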
Can we agree on these?
Questions:
Agree that resampling the 2D dispersed cutouts is a dead end. I believe the calibration working group wants these as "quick look" type products for the archive. Is this correct @hbushouse ?
If we did 2D rectified cutouts, how is the cross-dispersion defined? There is no slit.
The cross-dispersion size is defined by the footprint on the segmentation map from the source_catalog step
Yes, it makes sense for the size to be defined by the segmentation map. But generally, with spatial-vs-spectral rectified slit spectra, the slit is made orthogonal to the wavelength axis. For grism spectra there is no slit, so there is nothing to make orthogonal. So to get the wavelength to map exactly to X or Y, do we just shuffle pixels up and down and live with the resulting distortions in the image of, say, an extended source, or is there some more standard way of doing this?
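For concreteness, the "shuffle pixels up and down" option amounts to shifting each column by the local trace offset. A minimal sketch, assuming the per-column trace positions are already known (in practice they would come from the trace polynomials), with whole-pixel shifts only:

```python
import numpy as np

def straighten_trace(cutout, trace_y):
    """Crude 'shuffle pixels up and down' rectification of a dispersed cutout.

    cutout  : 2D array (ny, nx), dispersion along x
    trace_y : per-column y position of the trace (nx,)

    Each column is shifted by a whole number of pixels so the trace lands on a
    common row. No flux is redistributed between pixels, so sub-pixel structure
    of an extended source is distorted -- exactly the trade-off in question.
    """
    ny, nx = cutout.shape
    target = ny // 2                               # put the trace mid-frame
    out = np.zeros_like(cutout)
    for x in range(nx):
        shift = target - int(round(trace_y[x]))    # integer shift per column
        out[:, x] = np.roll(cutout[:, x], shift)
    return out
```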
My thoughts would be that if the direct image used for detection is resampled to a linear tangent plane, then it might be desirable to have the dispersed image undistorted in the same way so that the segmentation map and direct image of the source can be used to aid spectral extraction. Does that make sense? Is that the plan?
As far as I understand that's not the plan. @npirzkal and @gbrammer should fix any misunderstanding I have though...
Both the shape of the trace and the distortion in the grism image are taken into account by the grism transform polynomials plus the distortion model, both of which are in the GWCS model chain. So when we calculate where the pixel for a given wavelength resides, we should end up at the correct location. I believe the calculated pixel location lands at the correct x,y location on the trace in the dispersion direction. The size of the cross-dispersion region used for extraction is currently set to the size of the minimum bounding box calculated from the undistorted direct-image segmentation map. The teams may decide to change the extraction size, but I haven't seen any specifics yet.
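The minimum-bounding-box part is straightforward to sketch. This is a hypothetical standalone helper, not the pipeline's actual implementation, assuming the segmentation map encodes each object as its catalog id:

```python
import numpy as np

def min_bounding_box(segmap, source_id):
    """Minimum bounding box of one source in a segmentation map.

    segmap    : 2D integer array, pixel value = source id (0 = background)
    source_id : id of the object from the source_catalog step

    Returns (ymin, ymax, xmin, xmax), inclusive. The cross-dispersion extent
    of the 2D extraction is then ymax - ymin + 1 (or the x extent, depending
    on the dispersion direction).
    """
    ys, xs = np.nonzero(segmap == source_id)
    if ys.size == 0:
        raise ValueError(f"source {source_id} not found in segmentation map")
    return ys.min(), ys.max(), xs.min(), xs.max()
```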
Is the archive going to store quick-look extractions of individual objects?
I think you can just define an effective local WCS with the cross-dispersion axis perpendicular (in pixels) to the predominant dispersion axis. Both axes would be defined relative to the central pixel of the direct cutout, i.e., where you evaluate the trace polynomials. You can then drizzle/resample these however you want, say, combining both R and C grisms. Any definition for extended objects will be insufficient without the full "modeling" capabilities that Nor describes in the FIGS paper and that I use in grizli, which I think is fine here for spec2 and spec3.
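A minimal sketch of that effective local frame: rotate pixel offsets from the central pixel into coordinates along and perpendicular to the dispersion axis. The dispersion position angle is treated here as a given input; how it would actually be derived from the trace polynomials is not shown.

```python
import numpy as np

def local_spectral_frame(x, y, x0, y0, disp_angle_deg):
    """Effective local frame for a grism cutout.

    (x0, y0)       : central pixel of the direct cutout, where the trace
                     polynomials are evaluated
    disp_angle_deg : position angle of the predominant dispersion axis,
                     measured from the +x pixel axis (assumed known)

    Returns (s, t): coordinate along the dispersion axis and a
    cross-dispersion coordinate defined perpendicular to it in pixels.
    """
    theta = np.deg2rad(disp_angle_deg)
    dx, dy = np.asarray(x) - x0, np.asarray(y) - y0
    s = dx * np.cos(theta) + dy * np.sin(theta)    # along dispersion
    t = -dx * np.sin(theta) + dy * np.cos(theta)   # cross-dispersion
    return s, t
```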
As far as I understand, the archive is storing quicklook versions of the extracted 2D data from the SPEC2 pipeline. I don't think these need to be resampled; they could just be the straight 2D images we create now. @hbushouse?
The individual 2D spectra for each object+order are saved to their own extensions and each have a valid GWCS object which can be used to probe the trace. This GWCS object is used by the extract 1D code as well, where the decision about how much to extract in the cross-dispersion direction is also made. @philhodge knows exactly how the extract1D code does this. This all happens during SPEC2.
SPEC3, I think, is where the resampling using multiple observations of the same object might occur. Whether we need to produce a formal resampled (drizzled) 2D image in SPEC3, or whether we can get away with using the GWCS object + some rejection algorithm to go straight from the stack of 2D extractions to the 1D extraction, is what I'd like to understand more :)
The modeling capabilities that @gbrammer and I discussed yesterday are most appropriate for the contamination image that I think we need to add to the SPEC2 pipeline as another extension for each object, except where modeling of the contamination may be used as part of the rejection algorithm.
That is indeed my understanding. The modeling is something that was also discussed by @hbushouse and myself, and my understanding is that this is what Stage 3 (SPEC3?) is supposed to handle. As I mentioned earlier, it would be good to keep in mind that combining different PAs (e.g. GRISMR and GRISMC) is not trivial and likely more trouble than it is worth. Also, and @hbushouse can comment on this, this SPEC3 capability would not be part of Build 7.2. Is this correct?
@hbushouse @stscicrawford My understanding for the work in this ticket is that it's only for spec3 if it gets done at all, and shouldn't be included in 7.2. What's the current thinking?
Agreed -- moving to build 7.3
cc @swara13 @camipacifici
@jdavies-st and I were trying to figure out what we need to do for resampling the grism modes
For level 2 processing
Q: They are still single extractions of single objects, and distortion is taken out when passing through the WCS to get the wavelength/pixel locations, so I'm not sure what resampling these means. Is there any reason not to just skip resampling and use the un-resampled images for quicklook?
The only other thing I can think of is what I discuss below: simply rewrite the 2D image onto a linear wavelength scale in the dispersion direction, using the WCS to find the correct pixel location for each wavelength and recording the flux there.
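That rewrite-onto-a-linear-scale step can be sketched as a per-row interpolation. This is a toy under simplifying assumptions: the per-pixel wavelength array below stands in for what the cutout's GWCS would provide, and the trace is assumed already aligned with the rows.

```python
import numpy as np

def to_linear_wavelength(cutout, wave_of_pixel, nout):
    """Rewrite a 2D dispersed cutout onto a linear wavelength scale.

    cutout        : 2D array (ny, nx), dispersion along x
    wave_of_pixel : wavelength of each x pixel (nx,), monotonically increasing;
                    in the pipeline this would come from the cutout's GWCS
    nout          : number of output wavelength samples

    Each row is linearly interpolated onto a common, evenly spaced grid, so
    the x axis of the output is a linear function of wavelength.
    """
    wave_out = np.linspace(wave_of_pixel[0], wave_of_pixel[-1], nout)
    out = np.empty((cutout.shape[0], nout))
    for j, row in enumerate(cutout):
        out[j] = np.interp(wave_out, wave_of_pixel, row)
    return wave_out, out
```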
For level 3 processing
Level-3 processing combines multiple observations of each object, regardless of dispersion direction, into a single, resampled and combined image ... or into a stacked flux cube?
There was mention of "not asking the pipeline to interlace images, since only one of the allowed drizzle patterns fully samples the 4 x 4 grid of half-pixel steps. Particularly for the vanilla pipeline, we need to use the same combination algorithms as the other instruments (drizzling for direct images, something else for spectra)."
Q: What is the "something else"? Simple shift-and-add with 4x subsampling for the half-pixel steps?
Q: What is the reasoning behind making a combined 2D image from multiple extract_2d images? Is it to inform further processing, or just to have an image?
Q: Does it make any sense to take a stack of 2D extractions and use the WCS to get the flux for each wavelength-pixel? The WCS returns the dispersed pixel location of the wavelength for the object, so we could do a summation with rejection on the flux if needed, and then either write out the explicit wavelength-flux information in 1D form, or write back a 2D image with the measured pixel fluxes. The WCS takes the distortion and trace shape into account when you ask it for information, so when you get the pixel+flux information out, those effects have already been removed, and you could effectively just write the flux for each pixel down, making the x-axis a linear function of wavelength in the output image. You could use the cross-dispersion size for a pixel-box extraction at each wavelength, or record every pixel in the cross-dispersion axis for that wavelength inside the bounding box. With this method it doesn't matter whether you are combining row or column dispersions for multiple images, and you don't need to worry about rotations or resampling.
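The summation-with-rejection part of that question can be sketched independently of the WCS lookup: given the flux samples for one (object, wavelength) drawn from each exposure, clip outliers and average the rest. A simple sigma clip around the median is assumed here purely for illustration; the actual rejection algorithm is exactly what is being asked about above.

```python
import numpy as np

def combine_with_rejection(fluxes, nsigma=3.0):
    """Combine per-exposure flux samples at one wavelength, rejecting outliers.

    fluxes : 1D array of fluxes for the same (object, wavelength) drawn from
             each 2D extraction via its WCS -- dispersion direction (R or C)
             no longer matters at this point
    nsigma : clip threshold in units of the robust scatter

    Sigma-clips around the median using a MAD-based scatter estimate,
    then averages the surviving samples.
    """
    fluxes = np.asarray(fluxes, dtype=float)
    med = np.median(fluxes)
    mad = np.median(np.abs(fluxes - med))          # median absolute deviation
    sigma = 1.4826 * mad if mad > 0 else np.std(fluxes)
    if sigma > 0:
        keep = np.abs(fluxes - med) <= nsigma * sigma
    else:
        keep = np.ones(fluxes.size, dtype=bool)    # all samples identical
    return fluxes[keep].mean()
```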
I needs enlightenment :8ball: