spacetelescope / jwst

Python library for science observations from the James Webb Space Telescope
https://jwst-pipeline.readthedocs.io/en/latest/

NIRCam Subarrays for Coronagraphic observations #3055

Closed stscijgbot closed 1 year ago

stscijgbot commented 5 years ago

Issue JP-507 was created by Rosa Diaz:

New Coronagraphic observations reminded us about missing reference files for coronagraphic subarray observations. Before requesting missing reference files, it is necessary to confirm the names of the subarrays. This Jira issue has been filed to document the confirmation from the NIRCam Branch.

Data currently in the DMS test suite has subarray names like MASK430R. I believe these are old and have been replaced by the subarray names provided in JIRA ReDCaT-49 (and also the table below). SDP can write code that maps all the MASK430R subarray data to the new names, so that there are no issues with reference files or cal processing. It is my understanding that SDP should be using, or changing to, the SUB names and should not use the MASK subarray names anymore.

I need to confirm my understanding is correct and that the list is complete.

The list of what I believe are the correct names is repeated here (also in JIRA ReDCaT-49). The subarray names we should be using are in the last column, called "DMS Subarray".

| APT Subarray | APT Mask | DMS Subarray |
| --- | --- | --- |
| FULL | Any Mask | FULL |
| SUB640 | MASK210R | SUB640A210R |
| SUB640 | MASKSWB | SUB640ASWB |
| SUB320 | MASK335R | SUB320A335R |
| SUB320 | MASK430R | SUB320A430R |
| SUB320 | MASKLWB | SUB320ALWB |
| SUB128 | MASK210R | SUBNDA210R |
| SUB128 | MASK210R | SUBFSA210R |
| SUB128 | MASKSWB | SUBNDASWB |
| SUB128 | MASKSWB | SUBNDALWBL |
| SUB128 | MASKSWB | SUBFSASWB |
| SUB64 | MASK335R | SUBNDA335R |
| SUB64 | MASK335R | SUBFSA335R |
| SUB64 | MASK430R | SUBNDA430R |
| SUB64 | MASK430R | SUBFSA430R |
| SUB64 | MASKLWB | SUBNDALWBS |
| SUB64 | MASKLWB | SUBNDALWBL |
| SUB64 | MASKLWB | SUBFSALWB |
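To make the SDP translation described above concrete, here is a minimal Python sketch of the mapping in this table. It is illustrative only (the dictionary and function names are invented, not SDP code), and the SUB128/SUB64 target-acq rows are left out because a single (subarray, mask) pair there maps to more than one DMS name.

```python
# Illustrative sketch of the (APT subarray, APT mask) -> DMS subarray mapping
# from the table above; not SDP code. The SUB128/SUB64 target-acq rows are
# omitted because a single (subarray, mask) pair maps to several TA names
# and needs extra information to disambiguate.
APT_TO_DMS_SUBARRAY = {
    ("SUB640", "MASK210R"): "SUB640A210R",
    ("SUB640", "MASKSWB"): "SUB640ASWB",
    ("SUB320", "MASK335R"): "SUB320A335R",
    ("SUB320", "MASK430R"): "SUB320A430R",
    ("SUB320", "MASKLWB"): "SUB320ALWB",
}


def to_dms_subarray(apt_subarray: str, apt_mask: str) -> str:
    """Return the DMS subarray name; FULL stays FULL for any mask."""
    if apt_subarray == "FULL":
        return "FULL"
    return APT_TO_DMS_SUBARRAY[(apt_subarray, apt_mask)]
```

For example, to_dms_subarray("SUB320", "MASKLWB") returns "SUB320ALWB".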

stscijgbot commented 5 years ago

Comment by Howard Bushouse: All of the new subarray names listed above are already defined in the JWST Keyword Dictionary and SCSB data model schemas as allowed values for the SUBARRAY keyword. So once any conversion of values is performed in SDP code, they should work within the various systems.

stscijgbot commented 5 years ago

Comment by Alicia Canipe: Right now, these are the current subarray names (as mentioned in REDCAT-49). However, John Stansberry mentioned that this is a preliminary list, since subarray configurations are still being discussed.

stscijgbot commented 5 years ago

Comment by Mike Swam: Obsolete or not, this mode just came out of the SOR4B OTB simulator run on Feb 5 2019:

2019036182153 WARNING src=level1-prd_frame_time.get_prd_frame_time fsn=jw00628004001_03102_00001_nrcalong msg="No matching record for: SELECT FrameReadTime FROM SubarrayFrameTime WHERE InstrumentName LIKE 'NIRCAM' AND SubarrayName='MASKA430R' AND ReadoutPattern IN ('RAPID','') AND UseAfterDate <='2019-02-05 01:42:56.686' ORDER BY UseAfterDate LIMIT 1"

so SubarrayName='MASKA430R' is still being generated in current S&OC test configurations.

stscijgbot commented 5 years ago

Comment by Rosa Diaz: [~mswam], I understand that you are seeing that. However, the question here is whether SDP has to convert that subarray, 'MASKA430R', to 'SUB*430R'.

Clearly, SDP needs to make the mapping, because the subarray name MASKA430R does not tell you whether you are using SUB64, SUB320, or any of the other subarrays. This has to change. Are there any other subarray fields you can extract from the DB that tell you whether it was SUB320 or SUB64?

stscijgbot commented 5 years ago

Comment by Mike Swam: If 64 and 320 are the subarray sizes, then yes, we will eventually have the dimensions of the data arrays, which could be used to distinguish which subarray was in use. However, this is only known after the data has been processed to a certain point.
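As a rough sketch of what Mike describes — using the data-array dimensions to tell the subarrays apart once they are known — something like the following could work. The size table is an assumption based on the subarray names in this ticket (SUB640, SUB320, SUB128, SUB64, FULL = 2048x2048); it is not existing SDP code.

```python
# Assumed sizes implied by the subarray names discussed in this ticket;
# purely illustrative, not SDP code.
SIZE_TO_APT_SUBARRAY = {
    (2048, 2048): "FULL",
    (640, 640): "SUB640",
    (320, 320): "SUB320",
    (128, 128): "SUB128",
    (64, 64): "SUB64",
}


def apt_subarray_from_dims(subsize1: int, subsize2: int) -> str:
    """Guess the APT subarray from the data-array dimensions (SUBSIZE1/SUBSIZE2)."""
    return SIZE_TO_APT_SUBARRAY.get((subsize1, subsize2), "UNKNOWN")
```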

stscijgbot commented 5 years ago

Comment by Rosa Diaz: I talked with [~jstans] and he was not sure what to expect in the headers of the coronagraphic observation files. So I went ahead and checked several sources. Here is what I found, along with my questions:

1) All the data I can find that DMS has processed has the name of the mask used in the SUBARRAY field, and the CORONMSK field is always NONE. For example, in this dataset from OTIS (date 2017-09-19):

jw82700049001_02103_00001_nrcblong_uncal.fits

Instrument configuration information

INSTRUME= 'NIRCAM'   / Instrument used to acquire the data
DETECTOR= 'NRCBLONG' / Name of detector used to acquire the data
MODULE  = 'B'        / NIRCam module: A or B
CHANNEL = 'LONG'     / NIRCam channel: long or short
FILTER  = 'F300M'    / Name of the filter element used
PUPIL   = 'MASKBAR'  / Name of the pupil element used
PILIN   = F          / Pupil imaging lens in the optical path?
CORONMSK= 'NONE'     / coronagraph mask used
LAMP    = 'NONE'     / Internal lamp state

.........

Subarray parameters

SUBARRAY= 'MASKLWB'  / Subarray used
SUBSTRT1= 202        / Starting pixel in axis 1 direction
SUBSTRT2= 537        / Starting pixel in axis 2 direction
SUBSIZE1= 320        / Number of pixels in axis 1 direction
SUBSIZE2= 320        / Number of pixels in axis 2 direction
FASTAXIS= 1          / Fast readout axis direction
SLOWAXIS= -2         / Slow readout axis direction

Because of the SUBSIZE values, it is clear that the subarray used was SUB320. Looking in APT, I see that for MASKLWB we can use subarray=FULL and subarray=SUB320. However, from what [~mswam] says, the data comes down with SUBARRAY = MASKLWB regardless of whether FULL or SUB320 was used.
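For anyone who wants to repeat this check, the relevant keywords can be pulled from the uncal file's primary header with astropy; the file name is the OTIS example quoted above.

```python
from astropy.io import fits

# OTIS example quoted above; adjust the path as needed.
filename = "jw82700049001_02103_00001_nrcblong_uncal.fits"

header = fits.getheader(filename)  # primary header of the level-1b product
for key in ("SUBARRAY", "CORONMSK", "PUPIL", "SUBSIZE1", "SUBSIZE2"):
    print(f"{key:8s} = {header.get(key)}")
# For this dataset: SUBARRAY='MASKLWB', CORONMSK='NONE', SUBSIZE1=SUBSIZE2=320,
# i.e. the mask name sits in SUBARRAY while the sizes point to SUB320.
```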

First of all, I don't think we can have CORONMSK=NONE, and I don't believe it is correct for SUBARRAY to carry the name of the mask that was used.

2) To add to my confusion, in a table that [~hilbert] shared with me a while ago (attached file nircam_subarray_name_matching.xlss), he tells me that the OSS NAME and OPGS NAME for MASK210R are:

| OSS Name | OPGS Name |
| --- | --- |
| SUB640A210R | SUB640A210R |

But he does not know what the OSS names are when you use SUBARRAY = FULL. Similar SUB* values exist for the other masks, and likewise there is no name for the subarray when SUBARRAY = FULL is used.

If the above is correct, where does SDP get these values?

3) Why do the subarray values in APT not agree with the subarray values in the data?

4) Do we want the user to have to guess which mask their data used? If that is the case, what do we have CORONMSK for? Is it only for MIRI?

5) [~jstans] tells me that the SUBF and SUBN are for Coronagraphic TAs. From a list that I got from OSS (and also from Bryan) I have the following table:

| Implicit Subarray | APT Mask | DMS Subarray | TYPE |
| --- | --- | --- | --- |
| SUB128 | MASK210R | SUBFSA210R | TA_SUBARRAY |
| SUB128 | MASKSWB | SUBNDASWBS | ? for TA? |
| SUB128 | MASKSWB | SUBNDASWBL | TA_SUBARRAY |
| SUB128 | MASKSWB | SUBFSASWB | ? for TA? |
| SUB64 | MASK335R | SUBNDA335R | ? for TA? |
| SUB64 | MASK335R | SUBFSA335R | TA_SUBARRAY |
| SUB64 | MASK430R | SUBNDA430R | TA_SUBARRAY |
| SUB64 | MASK430R | SUBFSA430R | TA_SUBARRAY |
| SUB64 | MASKLWB | SUBNDALWBS | ? for TA? |
| SUB64 | MASKLWB | SUBNDALWBL | TA_SUBARRAY |
| SUB64 | MASKLWB | SUBFSALWB | ? for TA? |

I believe these are all the TA subarrays we can expect. Is this correct?

stscijgbot commented 5 years ago

Comment by Mike Swam: Just a reminder, SDP gets the values for SUBARRAY and CORONMSK from the Observatory Status File (OSF), populated by OSFWriter from the engineering data. So the real question is why the values from this source do not agree with the descriptions above.

SDP would prefer not to add lots of translation code if it is still possible to get the correct values in the OSF content, since maintaining such translation code over time as things change can be problematic. The data processing system would be more stable in the long-run if we can get the proper values into the OSF files, then SDP can just pass them on as they come down.

stscijgbot commented 5 years ago

Comment by Todd Miller: When invalid values come from SDP to the CAL datamodels,  they are suppressed and mapped to None after a warning.  Moreover,  they are propagated to both CRDS and output products as NONE.   Both CAL processing and CRDS bestrefs see NONE during all calibrations.   This is the pathway for getting NONE into the archive database which CRDS later uses for repro, bestrefs debug,  and creating mock parameter sets which don't evaporate due to pipeline testing.

The CRDS repro/debug parameter sets on the CRDS JWST OPS server were taken from C-string on Nov 12th and may also be stale.  The thing to start with now is the examination of current CAL logs to see if there is a CORONMSK warning which effectively kills the invalid value.  If the pipeline runs are fine,  there may be an issue here with stale mocked CRDS repro parameters on JWST OPS.

 

stscijgbot commented 5 years ago

Comment by Alicia Canipe: [~rdiaz] I thought the APT mask is what goes in the CORONMSK header, so it would be one of the following values: MASK210R, MASKSWB, MASK335R, MASK430R, MASKLWB, NONE. I might be misunderstanding John's table, but I would think that CORONMSK=MASKLWB and then the subarray name would be SUBARRAY=SUB320ALWB, or whatever the appropriate name is based on the detector/observation type with one of the masks: 

| SIAF AperName | OSF Name |
| --- | --- |
| NRCA5_MASKLWB | SUB320ALWB |
| NRCA4_TAMASKSWB | SUBNDASWBL |
| NRCA5_TAMASKLWB | SUBNDALWBS |
| NRCA5_TAMASKLWBL | SUBNDALWBL |
| NRCB5_MASKLWB | SUB320BLWB |
| NRCB5_TAMASKLWB | SUBNDBLWBS |
| NRCB5_TAMASKLWBL | SUBNDBLWBL |
| NRCA5_FSTAMASKLWB | SUBFSALWB |

In John's table in REDCAT-49, the OSF subarray name has the subarray size in it unless it is for target-acq, e.g., SUBND and SUBFS (which only needs to be processed to level-1, according to John's spreadsheet). I'm pretty sure if a subarray is not used, then the subarray name will just be SUBARRAY=FULL (i.e., x,y = 2048, 2048), even for coronagraphic observations. Then the CORONMSK will have the name of whatever mask was used.

John's updated table in REDCAT-49 also tells you if you look in the "Other note" column that all the subarrays in #5 above are for target acq.

stscijgbot commented 5 years ago

Comment by Todd Miller: [While my earlier explanation of NONE is still generally true,  checking the CAL datamodels and CRDS certify constraints showed that both support CORONMSK=NONE as a valid value.  So evidently NONE is not an anomaly every time it appears,  although appearing every time as NONE might be.   Consequently my earlier comments below are less relevant.]

When invalid values come from SDP to the CAL datamodels,  they are suppressed and mapped to None after a warning.  Moreover,  they are propagated to both CRDS and output products as NONE.   Both CAL processing and CRDS bestrefs see NONE during all calibrations.   This is the pathway for getting NONE into the archive database which CRDS later uses for repro, bestrefs debug,  and creating mock parameter sets which don't evaporate due to pipeline testing.

The CRDS repro/debug parameter sets on the CRDS JWST OPS server were taken from C-string on Nov 12th and may also be stale.  The thing to start with now is the examination of current CAL logs to see if there is a CORONMSK warning which effectively kills the invalid value.  If the pipeline runs are fine,  there may be an issue here with stale mocked CRDS repro parameters on JWST OPS.

 

stscijgbot commented 5 years ago

Comment by Rosa Diaz: JWSTKD-288 was filed to make 'NONE' invalid for Coronagraphic observations.

stscijgbot commented 5 years ago

Comment by John Stansberry: The content of Alicia's comment just above seems correct to me. However, it sounds as if the OSF data may not be correct for the case being explored here? Or if not incorrect, not consistent with Alicia's summary. Do we know if these data were generated from an APT Coronagraphic Imaging observation vs. from an Engineering Imaging observation?

(Actually, on second look, the table in Alicia's comment includes both TA subarrays and science subarrays. Not sure if those were just some selected examples, or if there is some confusion about which subarrays are for TA vs. science. I do agree with Alicia that the entries in Rossy's item 5 (https://jira.stsci.edu/browse/JP-507?focusedCommentId=307730&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-307730) are all TA subarrays, i.e. the "?" marks can be removed.)

There is a potential issue if CoronMask=NONE is invalid. For example, we might use the Engineering Imaging template to try and take data in a coron subarray, in particular to perform photometric calibration of that mode (i.e. for un-occulted sources seen near occulted stars). It would be preferable if the pipeline would run for such data if that can be arranged.

stscijgbot commented 5 years ago

Comment by Howard Bushouse: The parts of the Cal pipeline that process regular direct images will run just fine with CORONMSK=NONE. It's only the calwebb_coron3 pipeline, which is specific to processing coronagraphic data (e.g. PSF subtraction), that can't run with CORONMSK=NONE, because it won't be able to find the reference files it needs in CRDS (there aren't any reference files for CORONMSK=NONE, nor should there be, because it's unphysical for a coronagraphic exposure). So processing engineering exposures shouldn't be an issue.
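As an illustration of Howard's point (this is not logic that exists in the pipeline), one could screen CORONMSK before handing an association to calwebb_coron3. The meta attribute path assumes the jwst datamodels schema, where the CORONMSK keyword maps to meta.instrument.coronagraph:

```python
from jwst import datamodels
from jwst.pipeline import Coron3Pipeline


def run_coron3_if_possible(exposure_path, asn_path):
    """Illustrative guard: skip calwebb_coron3 when CORONMSK is NONE.

    Sketch only -- coron3 cannot match CRDS reference files for
    CORONMSK=NONE, so there is no point in starting it.
    """
    with datamodels.open(exposure_path) as model:
        coronmsk = model.meta.instrument.coronagraph  # CORONMSK keyword
    if coronmsk in (None, "NONE"):
        print("CORONMSK is NONE; not running calwebb_coron3 for this association.")
        return None
    return Coron3Pipeline.call(asn_path)
```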

stscijgbot commented 5 years ago

Comment by Alicia Canipe: Yeah, sorry, I just grabbed some selected examples in my comment above and not only TA subarrays. I should have clarified.

stscijgbot commented 5 years ago

Comment by John Stansberry: All good, then.

stscijgbot commented 5 years ago

Comment by Rosa Diaz: I talked with [~jstans] and he was not sure what to expect in the headers of the coronagraphic observation files. So I went ahead and checked several sources. Here is what I found, along with my questions:

1) All the data I can find that DMS has processed has the name of the mask used in the SUBARRAY field, and the CORONMSK field is always NONE. For example, in this dataset from OTIS (date 2017-09-19):

jw82700049001_02103_00001_nrcblong_uncal.fits

Instrument configuration information

INSTRUME= 'NIRCAM'   / Instrument used to acquire the data
DETECTOR= 'NRCBLONG' / Name of detector used to acquire the data
MODULE  = 'B'        / NIRCam module: A or B
CHANNEL = 'LONG'     / NIRCam channel: long or short
FILTER  = 'F300M'    / Name of the filter element used
PUPIL   = 'MASKBAR'  / Name of the pupil element used
PILIN   = F          / Pupil imaging lens in the optical path?
CORONMSK= 'NONE'     / coronagraph mask used
LAMP    = 'NONE'     / Internal lamp state

.........

Subarray parameters

SUBARRAY= 'MASKLWB'  / Subarray used
SUBSTRT1= 202        / Starting pixel in axis 1 direction
SUBSTRT2= 537        / Starting pixel in axis 2 direction
SUBSIZE1= 320        / Number of pixels in axis 1 direction
SUBSIZE2= 320        / Number of pixels in axis 2 direction
FASTAXIS= 1          / Fast readout axis direction
SLOWAXIS= -2         / Slow readout axis direction

Because of the SUBSIZE values, it is clear that the subarray used was SUB320. Looking in APT, I see that for MASKLWB we can use subarray=FULL and subarray=SUB320. However, from what [~mswam] says, the data comes down with SUBARRAY = MASKLWB regardless of whether FULL or SUB320 was used.

First of all, I don't think we can have CORONMSK=NONE, and I don't believe it is correct for SUBARRAY to carry the name of the mask that was used.

2) To add to my confusion, in a table that [~hilbert] shared with me a while ago (attached file nircam_subarray_name_matching.xlss), he tells me that the OSS NAME and OPGS NAME for MASK210R are:

| OSS Name | OPGS Name |
| --- | --- |
| SUB640A210R | SUB640A210R |

But he does not know what the OSS names are when you use SUBARRAY = FULL. Similar SUB* values exist for the other masks, and likewise there is no name for the subarray when SUBARRAY = FULL is used.

If the above is correct, where does SDP get these values?

3) Why do the subarray values in APT not agree with the subarray values in the data?

4) Do we want the user to have to guess which mask their data used? If that is the case, what do we have CORONMSK for? Is it only for MIRI?

5) [~jstans] tells me that the SUBF and SUBN are for Coronagraphic TAs. From a list that I got from OSS (and also from Bryan) I have the following table:

| Implicit Subarray | APT Mask | DMS Subarray | TYPE |
| --- | --- | --- | --- |
| SUB128 | MASK210R | SUBFSA210R | TA_SUBARRAY |
| SUB128 | MASKSWB | SUBNDASWBS | TA_SUBARRAY |
| SUB128 | MASKSWB | SUBNDASWBL | TA_SUBARRAY |
| SUB128 | MASKSWB | SUBFSASWB | TA_SUBARRAY |
| SUB64 | MASK335R | SUBNDA335R | TA_SUBARRAY |
| SUB64 | MASK335R | SUBFSA335R | TA_SUBARRAY |
| SUB64 | MASK430R | SUBNDA430R | TA_SUBARRAY |
| SUB64 | MASK430R | SUBFSA430R | TA_SUBARRAY |
| SUB64 | MASKLWB | SUBNDALWBS | TA_SUBARRAY |
| SUB64 | MASKLWB | SUBNDALWBL | TA_SUBARRAY |
| SUB64 | MASKLWB | SUBFSALWB | TA_SUBARRAY |

I believe these are all the TA subarrays we can expect. Is this correct? NIRCam says yes (see below).

stscijgbot commented 5 years ago

Comment by Rosa Diaz: Dear [~mallen], I think you should be able to help us figure this out.

Current Coronagraphic observations seem to come with CORONMSK = NONE and SUBARRAY=MASKXXXX (where XXXX can be any of the possible values like 210R, 335R, etc.). According to the information provided by the NIRCam Team, CORONMSK should be the MASKXXXX value and SUBARRAY should be SUBYYYMXXXX, where YYY indicates the subarray used (e.g., SUB320 means YYY=320), M can be A or B (i.e., the module), and XXXX again corresponds to the mask used. You can find this information in the two files included in Jira issue https://jira.stsci.edu/browse/REDCAT-49. In the comment above from Mike Swam ([~mswam] added a comment - 12/Feb/19 7:22 AM), he states that this information is provided by OSFWriter and taken from engineering data.

Why are we getting the mask name in the subarray name and the mask information as NONE? And why don't we get the expected subarray names?
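For reference, the naming rule described above (SUBYYYMXXXX) can be written out in one line; this is just a restatement of the convention with invented names, not how the names are actually generated (those come from the REDCAT-49 tables):

```python
def expected_dms_subarray(size: int, module: str, mask: str) -> str:
    """Compose SUB<size><module><mask suffix>, e.g. (320, 'A', 'MASKLWB') -> 'SUB320ALWB'."""
    return f"SUB{size}{module}{mask.replace('MASK', '', 1)}"
```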

 

stscijgbot commented 5 years ago

Comment by John Stansberry: I think that should be "According to the NIRCam Team, ..." just above.

 

Rossy: Thanks for noticing that! I corrected the mistake.

stscijgbot commented 5 years ago

Comment by John Stansberry: Did a little cleanup on the table of subarray name mapping in the Description and found one typo.

I note that the mappings for the target-acq subarrays for the 2 bar occulters are non-unique (mask + size are the same for all 3 associated subarrays). The 1st science exposure filter is used in OSS to break that degeneracy, so it might be worth documenting that above. The mapping is also given in the NIRCam SIAF TR (still in draft, submitted to SOCCER), and in the OSS requirements for NIRCam.

stscijgbot commented 5 years ago

Comment by Marsha Allen: I don't know why the mask name and mask information are not coming out correctly. OPGS uses the database parameters nircam_templates/coronagraph and nircam_templates/modules to build the subarray name that OSS uses. Here's an example rule:

If coronagraph is MASK210R and modules is A,

set SUBARRAY to SUB640A210R

Maybe [~idash] can shed some light on this?

stscijgbot commented 5 years ago

Comment by Ilana Dashevsky: I do not see any problems in the visit files. However, for PPS 14.6, we made several changes to filter/subarray combinations, which are given below.

NIRCam Coronagraphic Imaging (where [A|B] refers to module A or B):

stscijgbot commented 5 years ago

Comment by Rosa Diaz: [~idash], here you mention combinations for TAs. Is it the same for science observations? I.e., will a combination like coronagraph MASK210R, modules A, and subarray/aperture SUB640 result in SUBARRAY being set to SUB640A210R?

stscijgbot commented 5 years ago

Comment by Rosa Diaz: All the missing reference files were delivered. Note that the NIRCam team decided they will not process any of the TA images, so reference files for these subarrays were not created.

stscijgbot commented 5 years ago

Comment by Howard Bushouse: How will DMS know that it doesn't have to process any of the TA images? Or will they just be left to fail in the pipeline?

stscijgbot commented 5 years ago

Comment by Alicia Canipe: This was a recent request, see https://jira.stsci.edu/browse/JSOCINT-209 and https://jira.stsci.edu/browse/CRDS-260?focusedCommentId=334357&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-334357.

John Stansberry posted an excel spreadsheet with a list of the subarrays in https://jira.stsci.edu/browse/REDCAT-49, with a column noting whether or not the subarray is for target-acq only.

stscijgbot commented 5 years ago

Comment by Rosa Diaz: [~bushouse], What would be easier?

1) SDP writes new code so that these files are not processed beyond Level 1b?

2) The CAL pipeline gets a special configuration file for all the NIRCam TAs that does nothing with them?

 

stscijgbot commented 5 years ago

Comment by Howard Bushouse: There's no way to tell Cal to not execute a pipeline (it's like trying to tell a light switch to not turn on - the only way to make that happen is to not push it with your finger in the first place). Hence the solution must be in the SDP workflow manager, to have it not push the button for these datasets.

stscijgbot commented 5 years ago

Comment by Mike Swam: I agree with Howard; if SDP ran calibration at level2a, we'd expect output files and when none came out our archive steps would fail anyway. Best to not call the 2a step in the first place.

stscijgbot commented 4 years ago

Comment by John Stansberry: It looks like these issues have been resolved, assuming the workflow manager now knows not to go past level-1b for the TA data.

I think there is a repeated typo in Ilana's comment from 11/Mar/2019: All the "MIRTAMAIN" references should be "NRCTAMAIN". Nobody seems to have gotten confused, but I figure it's worth mentioning just to be clear.

stscijgbot commented 4 years ago

Comment by Mike Swam: There is no current capability in the JWST DMS for having a dataset stop processing after Level1b.  In order to add such a capability in a future DMS release, we need a way to unambiguously identify the types of datasets that should not process beyond level1b.   Can someone include a very specific ruleset in this ticket that shows how to pick out these cases from all the other data that flows through the JWST DMS?   With that ruleset in hand, SDP can craft special workflows that stop at Level1b and call those when datasets match the ruleset.

I have filed JSDP-1578 to add new workflow types to DMS-SDP, but that ticket will be blocked until we can settle on a rule set for identifying these cases.

 

stscijgbot commented 3 years ago

Comment by Howard Bushouse: We still need consensus on a set of rules that will cause SDP to not do any processing beyond level-1b for the desired exposure types. Can someone please indicate here exactly what values of EXP_TYPE, SUBARRAY, etc. should not be processed beyond level 1b? Is it any and all NIRCam TA exposures (i.e. EXP_TYPE='NRC_TA*') or any NIRCam TA exposure that uses certain SUBARRAY values, or what?

stscijgbot commented 3 years ago

Comment by John Stansberry: Anything with EXP_TYPE='NRC_TA*' needn't be processed beyond level 1b, regardless of subarray. We don't have complete reference files for our TA subarrays, and currently have no plan to acquire them.

I believe WF confirmed that the 8x8 data acquired for LOS Jitter observations can also stop at 1b. [~lajoie]  can probably confirm (although that's a bit off topic for this ticket).

stscijgbot commented 3 years ago

Comment by Mike Swam: I think the NIRCam TA exposure type value is 'NRC_TACQ'.

Here are the rest of the possible values, according to the OSF schema file (some get translated to more user-friendly strings later in the Level 1 code, but we'd be writing rules against the OSF values):

  **I believe this comes out as NRC_TACONFIRM in the headers.

Do any of the rest of these need to stop at Level1b?

 

stscijgbot commented 3 years ago

Comment by Howard Bushouse: If the NRC_TACONF (aka NRC_TACONFIRM) exposures are taken using the same subarrays as NRC_TACQ, then they should be skipped too. All other types should receive normal calibration processing.

stscijgbot commented 3 years ago

Comment by Charles-Philippe Lajoie: Correct, the 8x8 LOS jitter observations need only be processed up to Level-1b (see JWSTDMS-442).

stscijgbot commented 3 years ago

Comment by John Stansberry: OK - you guys got me. NRC_TACONFIRM are full-frame images that are used on the ground to determine the final position of the target behind the occulters, and are not used on-board by OSS. We do have all necessary calibration files to process these full-frame images.

So: NRC_TACQ => stop at level 1b; NRC_TACONFIRM => process beyond level 1b using the Coron pipeline.

The TACONFIRM images are never dithered, and the pre- and post-slew confirmation images should not be combined. However it would be best if we could get the distortion solutions applied (not sure if that's in level 2 or level 3...).

 

Re. the rest of the exposure types listed above by Mike Swam, all should be processed beyond level 1b.
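Pulling the rule agreed here together in code form (illustrative only; the real rule belongs in the SDP workflow configuration, per the earlier comments):

```python
def stops_at_level_1b(exp_type: str) -> bool:
    """Per this thread: only NRC_TACQ stops at level 1b.

    NRC_TACONFIRM (full-frame confirmation images) and all other NIRCam
    exposure types continue through normal calibration. The 8x8 LOS-jitter
    case is tracked separately and not covered here.
    """
    return exp_type == "NRC_TACQ"
```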

 

stscijgbot commented 3 years ago

Comment by Howard Bushouse: NRC_TACONFIRM will always get processed using the regular imaging calibration pipelines up through level 2b (calwebb_detector1, calwebb_image2 pipelines). They will not trigger level 3 processing and hence there's no worry about before and after exposures getting combined. The only piece that's currently missing is that all TA-like exposures do NOT get resampled/drizzled in calwebb_image2. I believe there's another ticket open somewhere for the INS CalWG to consider whether we want to (or can) apply resampling to individual TA images at the end of calwebb_image2 (as is done for science imaging).
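For reference, processing a TACONFIRM exposure through level 2b as Howard describes would look roughly like this with the jwst package's pipeline classes (the file name is hypothetical; only the suffixes follow DMS conventions):

```python
from jwst.pipeline import Detector1Pipeline, Image2Pipeline

# Hypothetical NRC_TACONFIRM exposure.
uncal = "jw01234001001_02101_00001_nrcalong_uncal.fits"

# Level 1b -> 2a: ramp fitting etc., producing a *_rate.fits product.
Detector1Pipeline.call(uncal, save_results=True)

# Level 2a -> 2b: WCS assignment, flat field, photom. Per the comment above,
# TA-like exposures are currently not resampled/drizzled at the end of
# calwebb_image2, and no level-3 processing is triggered.
Image2Pipeline.call(uncal.replace("_uncal", "_rate"), save_results=True)
```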

stscijgbot commented 3 years ago

Comment by Howard Bushouse: Given the fact that this ticket was originally created to simply be a record of subarray names for NIRCam coronagraphic observations and has since devolved into a request to not calibrate any and all NRC_TACQ exposures, I think it would be very useful if someone were to file a new separate ticket somewhere (NOT in the JP project, because it has nothing to do with work in the Cal pipeline) that simply requests no calibration processing for NRC_TACQ exposures. It'll be a nightmare in another 6-12 months to try to trace back to the origin of the request to not calibrate NRC_TACQ exposures, because the only place it appears is buried way down here in the last few comments of a totally unrelated ticket. And for the future, it would be very useful to please constrain any given ticket to ONE topic only.

stscijgbot commented 3 years ago

Comment by Alicia Canipe: Hi [~bushouse], I agree. [~cracraft] and I were just talking about what to do with this ticket when we were reviewing old tickets. There have been a lot of TA discussions (e.g., https://jira.stsci.edu/browse/JSOCINT-209) and it's on my to-do list to double check all the tickets in different places, but I think that topic is separate from the summary title of this ticket. [~rdiaz] what would you like to do with this ticket?

stscijgbot commented 3 years ago

Comment by Howard Bushouse: For what it's worth, JSDP-1639 is being worked right now, which will include the changes necessary to skip calibration for all NRC_TACQ exposures. But it too started out on a different topic, which involves not calibrating science exposures taken in WFSC_LOS_JITTER mode, and will include the NRC_TACQ work along with it.