spacetelescope / jwst

Python library for science observations from the James Webb Space Telescope
https://jwst-pipeline.readthedocs.io/en/latest/

NIRCam testing of the Image3 pipeline #2963

Closed stscijgbot closed 5 years ago

stscijgbot commented 5 years ago

Issue JP-492 was created by Alicia Canipe:

NIRCam testing of the Image3 pipeline

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: I placed one problem example in central storage at

/user/lubeda/adrizzproblems 

The file code.txt has the code that I am running. The file lmc-f200w-test-10_i2d.fits is the output that I obtain.

The alignment is not correct. I have seen this problem with images simulated using the SWC (short wavelength channel), but not with the LWC (long wavelength channel).

I have tried multiple variations of the parameters and of the file order in the association JSON file.

The problem might be in the line

im3.tweakreg.expand_refcat = True

but I am not sure.  
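For reference, a minimal sketch of this kind of run via the Python interface (the association file name is illustrative; {{im3.run(jsonfile)}} matches the traceback later in this thread):

{code:python}
# a minimal sketch, not the actual code.txt; the file name is illustrative
from jwst.pipeline import Image3Pipeline

im3 = Image3Pipeline()
im3.tweakreg.expand_refcat = True   # the parameter suspected above
im3.run('association-f200w.json')
{code}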

 

stscijgbot commented 5 years ago

Comment by James Davies: [~lubeda], I looked at your problem example. If I run the dataset through with {{tweakreg}} turned off, it produces an output mosaic where the different dithers are rotated with respect to one another.

{code:bash}
collect_pipeline_cfgs ./config
strun config/calwebb_image3.cfg association-f200w.json --steps.tweakreg.skip=True
{code}

!Screen Shot 2018-12-28 at 11.37.39 AM.png|thumbnail!

This tells me that the WCSs in the original images are not aligned, i.e. the simulator has introduced a rotation from one pointing to the next.

If I turn {{tweakreg}} on and increase the search radius used to match the same star across the different frames to 2 arcsec, it produces a reasonable result.

{code:bash}
strun config/calwebb_image3.cfg association-f200w.json --steps.tweakreg.searchrad=2.
{code}

!Screen Shot 2018-12-28 at 11.30.00 AM.png|thumbnail!

So I think this is an issue with the simulated data, not the pipeline code.

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: Thank you for looking into this. I saw that same pattern during my tests. I understand that the WCS is assigned to the images in the Image2 step. Could it be that the problem is in Image2 instead of the simulated data? My concern is that I did not have the same problem when running the pipeline on LWC simulated images. I am just trying to narrow down the source of the error before contacting the Mirage developer. Thank you.

stscijgbot commented 5 years ago

Comment by James Davies: That's possible. If you have the {{_rate.fits}} images for this simulated dataset, we can look to see if there's something weird going on in the stage 2 pipeline.

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: I will get back to you with those images.


stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: Sorry for the delay. The _rate.fits files are now placed in central storage at

/user/lubeda/adrizzproblems

 

Thanks for your help.

 

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: [~jdavies], I was wondering if you had a chance to look into this problem. Before I involve the simulator developer, I wanted to know your opinion. Thank you.

stscijgbot commented 5 years ago

Comment by James Davies: Yes, I looked at the {{_rate}} images, and they have the same WCSINFO block as the {{_cal}} images produced by running them through {{calwebb_image2}}. They suffer the same rotation problem. Perhaps this is a problem with the simulations: when the dithers are generated and the declination changes, the ROLL_REF also has to change. It does, but perhaps the sign of the change is flipped? Just a guess. But something is introducing a rotation from one dither to the next; each pointing is rotated relative to the others.
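For anyone wanting to check this, the pointing keywords can be dumped from each exposure's SCI header (a diagnostic sketch; the file pattern is illustrative):

{code:python}
# sketch: compare reference pointing keywords across the dithered exposures
from astropy.io import fits
import glob

for f in sorted(glob.glob('*_rate.fits')):   # illustrative pattern
    hdr = fits.getheader(f, 'SCI')
    print(f, hdr['RA_REF'], hdr['DEC_REF'], hdr['ROLL_REF'])
{code}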

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: [~jdavies]

I am working on aligning and mosaicking images simulated using the Long Wavelength Channel. Filters F444W and F356W presented no problems. However, when I use images simulated with filter F277W I get the Python error shown below.

I placed the association file and the level 2 calibrated images in central storage at:

/user/lubeda/adrizzproblem-02

Do you know what is causing this problem? Is there a quick workaround?

Thanks for your help with this problem.

 

The pipeline crashes with this error:

{code}
Traceback (most recent call last):
  File "python_pipeline_lmc.py", line 150, in <module>
    im3.run(jsonfile)
  File "/Users/lubeda/anaconda2/envs/jwst_mirage/lib/python3.5/site-packages/jwst-0.10.1a0.dev204+ge537bc42-py3.5-macosx-10.6-x86_64.egg/jwst/stpipe/step.py", line 400, in run
    step_result = self.process(*args)
  File "/Users/lubeda/anaconda2/envs/jwst_mirage/lib/python3.5/site-packages/jwst-0.10.1a0.dev204+ge537bc42-py3.5-macosx-10.6-x86_64.egg/jwst/pipeline/calwebb_image3.py", line 70, in process
    input_models = self.tweakreg(input_models)
  File "/Users/lubeda/anaconda2/envs/jwst_mirage/lib/python3.5/site-packages/jwst-0.10.1a0.dev204+ge537bc42-py3.5-macosx-10.6-x86_64.egg/jwst/stpipe/step.py", line 400, in run
    step_result = self.process(*args)
  File "/Users/lubeda/anaconda2/envs/jwst_mirage/lib/python3.5/site-packages/jwst-0.10.1a0.dev204+ge537bc42-py3.5-macosx-10.6-x86_64.egg/jwst/tweakreg/tweakreg_step.py", line 135, in process
    sigma=self.sigma
  File "/Users/lubeda/anaconda2/envs/jwst_mirage/lib/python3.5/site-packages/jwst-0.10.1a0.dev204+ge537bc42-py3.5-macosx-10.6-x86_64.egg/jwst/tweakreg/imalign.py", line 212, in align
    sigma=sigma
  File "/Users/lubeda/anaconda2/envs/jwst_mirage/lib/python3.5/site-packages/jwst-0.10.1a0.dev204+ge537bc42-py3.5-macosx-10.6-x86_64.egg/jwst/tweakreg/wcsimage.py", line 1259, in align_to_ref
    tolerance=tolerance)
  File "/Users/lubeda/anaconda2/envs/jwst_mirage/lib/python3.5/site-packages/jwst-0.10.1a0.dev204+ge537bc42-py3.5-macosx-10.6-x86_64.egg/jwst/tweakreg/wcsimage.py", line 1041, in match2ref
    separation=separation
  File "/Users/lubeda/anaconda2/envs/jwst_mirage/lib/python3.5/site-packages/stsci.stimage-0.2.2-py3.5-macosx-10.6-x86_64.egg/stsci/stimage/__init__.py", line 247, in xyxymatch
    nreject)
RuntimeError: Number of output coordinates exceeded allocation (218)
{code}

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: Edit to my previous comment:

The problem is solved by increasing the value of the tweakreg.snr_threshold parameter.
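For example, with the Python interface used elsewhere in this thread (the value is illustrative and dataset-dependent):

{code:python}
# a higher detection threshold feeds fewer sources to xyxymatch
im3.tweakreg.snr_threshold = 1000.0
{code}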

The question then is: how can we keep the pipeline from crashing? I do not know the right threshold value a priori. Does the error text "exceeded allocation" indicate a memory problem?

 


stscijgbot commented 5 years ago

Comment by James Davies: Interesting. [~mcara], any ideas what could be causing {{xyxymatch}} to barf?

mcara commented 5 years ago

@jdavies-st xyxymatch is very heavy on memory, and most likely this is a memory allocation error. I guess this is the nature of the xyxymatch algorithm (though I am not familiar with the particular code used here).

With regard to the threshold: I do not know how to set it automatically, and the classical drizzlepac.tweakreg was not intended to be run on auto-pilot. The threshold probably should depend on how faint the sources in the observed field are and on the SNR of the observation (which in turn depends on readout noise, exposure time, and maybe other instrument characteristics). How can you know how bright the stars in an image are before you detect them? What does the histogram of the star brightness distribution look like? This is tough... A simplistic approach would be to iterate: start with a huge threshold and keep lowering it (e.g., halving it) until you get the desired number of detections (but without matching, how do you know which detections are real and which are cosmic rays, etc.?). This kind of approach would be quite time-consuming; a sketch of the idea follows.
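A minimal sketch of that iterative-threshold strategy, assuming a photutils-based detection routine (the function name, its defaults, and the crude noise estimate are all illustrative, not pipeline API):

{code:python}
import numpy as np
from photutils.detection import DAOStarFinder

def detect_with_adaptive_threshold(data, fwhm=2.5, start_snr=1000.0,
                                   min_snr=3.0, target=100):
    """Halve the S/N threshold until roughly `target` sources are found.

    Illustrates the iteration strategy only; the JWST pipeline does not
    work this way out of the box.
    """
    sky = np.nanmedian(data)
    noise = np.nanstd(data)              # crude noise estimate for the sketch
    snr = start_snr
    sources = None
    while snr >= min_snr:
        finder = DAOStarFinder(fwhm=fwhm, threshold=snr * noise)
        sources = finder(data - sky)
        if sources is not None and len(sources) >= target:
            break
        snr /= 2.0                       # halve the threshold and retry
    return sources, snr
{code}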

I believe the best approach would be for the pipeline to estimate the SNR per instrument per exposure using information such as readout noise and exposure time, and to adjust the threshold to something that would detect "reliable" stars, let's say 5*sigma (instrument scientists should experiment and find the optimal value). Now, in some images this could mean too many detections, which would overwhelm xyxymatch. However, this should not be an issue since https://github.com/spacetelescope/jwst/pull/2706: simply set {{brightest}} to a reasonable number (such as the current default of 100).

Now, {{brightest}} alone will not help if {{expand_refcat}} is True and you are aligning hundreds of images, each with 100 detections => a total of 10,000+ sources in the reference catalog. You have to figure out an average number of detections per image that gives a reasonable total number of sources in the reference catalog.
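For example, with the Python interface used earlier in this thread (the value is illustrative):

{code:python}
# cap the number of detections fed to the matcher (per image)
im3.tweakreg.brightest = 100
{code}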

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: [~jdavies] and [~mcara]

I usually set

im3.tweakreg.enforce_user_order = True

to tell image3 to use the science images in the order that I provide. I am now using

im3.tweakreg.enforce_user_order = False

and I get the following error:

{code}
2019-01-31 13:11:26,113 - stpipe.Image3Pipeline.tweakreg - INFO - ***** jwst.tweakreg.imalign.align() started on 2019-01-31 13:11:26.113642
2019-01-31 13:11:26,113 - stpipe.Image3Pipeline.tweakreg - INFO -
Traceback (most recent call last):
  File "python_pipeline_lmc.py", line 150, in <module>
    im3.run(jsonfile)
  File "/Users/lubeda/anaconda2/envs/jwst_mirage/lib/python3.5/site-packages/jwst-0.10.1a0.dev204+ge537bc42-py3.5-macosx-10.6-x86_64.egg/jwst/stpipe/step.py", line 400, in run
    step_result = self.process(*args)
  File "/Users/lubeda/anaconda2/envs/jwst_mirage/lib/python3.5/site-packages/jwst-0.10.1a0.dev204+ge537bc42-py3.5-macosx-10.6-x86_64.egg/jwst/pipeline/calwebb_image3.py", line 70, in process
    input_models = self.tweakreg(input_models)
  File "/Users/lubeda/anaconda2/envs/jwst_mirage/lib/python3.5/site-packages/jwst-0.10.1a0.dev204+ge537bc42-py3.5-macosx-10.6-x86_64.egg/jwst/stpipe/step.py", line 400, in run
    step_result = self.process(*args)
  File "/Users/lubeda/anaconda2/envs/jwst_mirage/lib/python3.5/site-packages/jwst-0.10.1a0.dev204+ge537bc42-py3.5-macosx-10.6-x86_64.egg/jwst/tweakreg/tweakreg_step.py", line 135, in process
    sigma=self.sigma
  File "/Users/lubeda/anaconda2/envs/jwst_mirage/lib/python3.5/site-packages/jwst-0.10.1a0.dev204+ge537bc42-py3.5-macosx-10.6-x86_64.egg/jwst/tweakreg/imalign.py", line 181, in align
    enforce_user_order=enforce_user_order or not expand_refcat
  File "/Users/lubeda/anaconda2/envs/jwst_mirage/lib/python3.5/site-packages/jwst-0.10.1a0.dev204+ge537bc42-py3.5-macosx-10.6-x86_64.egg/jwst/tweakreg/imalign.py", line 329, in max_overlap_pair
    si = np.sum(m[i])
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
{code}

 

Any ideas on how to work around this?

Thank you.

 

 

mcara commented 5 years ago

@leonardoubeda This new crash that you reported above could be called a bug (in the sense that it was not a bug originally but became one as the code aged). It happens due to a combination of factors: the transition from Python 2.7 to >=3.5, under which dividing two integers now produces a floating-point number, AND using the newest numpy, which rejects float indices. I will need to make a PR to fix this. I will try to get this fixed tonight.
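A minimal illustration of that failure mode, separate from the pipeline code itself:

{code:python}
import numpy as np

m = np.arange(10)
i = len(m) / 2         # Python 3 true division -> 5.0, a float
try:
    si = np.sum(m[i])  # float index -> IndexError on recent numpy
except IndexError as err:
    print(err)         # "only integers, slices (`:`), ellipsis (`...`) ..."
si = np.sum(m[len(m) // 2])  # the fix: floor (integer) division
{code}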

mcara commented 5 years ago

@leonardoubeda The bug described in https://github.com/spacetelescope/jwst/issues/2963#issuecomment-459449539 should have been fixed in https://github.com/spacetelescope/jwst/pull/3072

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: The problem that I mentioned on 27 DEC 2018 is now solved. The simulations were indeed wrong.

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: Is it possible to tell the pipeline step calwebb_image3 to save the shift file to the working directory? This would be very useful for troubleshooting. Thanks.

mcara commented 5 years ago

The new code does not have any support for shift files. However, all the transformation parameters (shifts, rotations, etc.) are printed in the log. Thus, inspecting the log file should reveal the same information that would be in a "shift file", albeit in a less succinct form.
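For example, {{grep XRMS my.log}} on a saved log pulls out the fit residual lines, as is done later in this thread; the other transformation parameters appear on nearby log lines.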

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: OK. What is the command syntax to save the log if I am running something like {{strun config/calwebb_image3.cfg association-f200w.json --steps.tweakreg.skip=False}}? Thanks.

 

stscijgbot commented 5 years ago

Comment by James Davies: Glad you solved the issue, Leonardo.

Try

{{strun config/calwebb_image3.cfg association-f200w.json --steps.tweakreg.skip=False >& my.log}}

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: I am now testing the calwebb_image3 step using short wavelength channel simulated data obtained with filters F150W and F200W.

I find a difference between the modules. Module B produces a fairly good data product. Module A does not. The attached image shows the problem.

The command I am using is:

{code:bash}
strun config/calwebb_image3.cfg association-f150w-moduleb.json \
    --steps.tweakreg.skip=False \
    --steps.tweakreg.searchrad=2. \
    --steps.tweakreg.snr_threshold=4000.0 \
    --steps.tweakreg.enforce_user_order=True \
    --steps.tweakreg.expand_refcat=True \
    --steps.skymatch.skymethod='global' \
    --steps.tweakreg.fitgeometry='rscale'
{code}

 

The files are located in central storage at

/user/lubeda/astronomy/nircam/lmc/f150w 

There are two logs: moduleA.log and moduleB.log.

The following lists show the XRMS and YRMS values from the fits. 

{code}
(jwst_mirage) bash-4.2$ grep XRMS moduleA-shifts.log
2019-02-07 12:24:55,070 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0862306    YRMS: 0.0627074
2019-02-07 12:24:55,227 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.120004    YRMS: 0.0560095
2019-02-07 12:24:55,508 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0235626    YRMS: 0.0172296
2019-02-07 12:24:55,659 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.144077    YRMS: 0.0344797
2019-02-07 12:24:55,813 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.176316    YRMS: 0.0967449
2019-02-07 12:24:55,972 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.16833    YRMS: 0.0664161
2019-02-07 12:24:56,131 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.188678    YRMS: 0.0532111
2019-02-07 12:24:56,288 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.127199    YRMS: 0.0278657
2019-02-07 12:24:56,352 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0969485    YRMS: 0.0391584
2019-02-07 12:24:56,416 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0385296    YRMS: 0.0218558
2019-02-07 12:24:56,480 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0352482    YRMS: 0.0229488
{code}

{code}
(jwst_mirage) bash-4.2$ grep XRMS moduleB-shifts.log
2019-02-07 12:35:58,440 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0142906    YRMS: 0.0147886
2019-02-07 12:35:58,597 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0214292    YRMS: 0.0184038
2019-02-07 12:35:58,661 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0269435    YRMS: 0.0284755
2019-02-07 12:35:58,890 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0227748    YRMS: 0.0305181
2019-02-07 12:35:58,948 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0241679    YRMS: 0.031532
2019-02-07 12:35:59,008 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0239153    YRMS: 0.0264235
2019-02-07 12:35:59,161 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0217435    YRMS: 0.0245855
2019-02-07 12:35:59,316 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0341368    YRMS: 0.0298886
2019-02-07 12:35:59,474 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0292996    YRMS: 0.027827
2019-02-07 12:35:59,634 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.020472    YRMS: 0.0245211
2019-02-07 12:35:59,794 - stpipe.Image3Pipeline.tweakreg - INFO - XRMS: 0.0248224    YRMS: 0.0315859
{code}

Do you have any idea why it works for one module and not for the other?

Thank you.

!comparison-modA-modB.png!

stscijgbot commented 5 years ago

Comment by Mihai Cara: [~lubeda], what is the difference between "mod A" and "mod B"? How did you produce these simulations?

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: The simulated images were generated using the NIRCam simulator called Mirage. The GitHub repository is at

https://github.com/spacetelescope/mirage

modA refers to images simulated using Module A of the NIRCam instrument. Please see

https://jwst.stsci.edu/instrumentation/nircam

for more information on the instrument.

I decided to separate the files by module when I realized that if I used both modules in the same association, the final data product was always wrong. Perhaps there is a difference in a reference file?

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: [~mcara] did you have a chance to take a look at this problem? I would appreciate your input. Thank you.

stscijgbot commented 5 years ago

Comment by Mihai Cara: [~lubeda] Sorry, I have not looked into this issue at all. Based on what you have described, I do not see anything indicating a failure of tweakreg itself. I am not familiar with MIRaGe, and it would take me a lot of time to familiarize myself with it. Maybe it would be a good idea to ask [~hilbert] to take a look at this issue as well. It could be the simulator, the reference files, something else done to the files, etc.

Alternatively, if you could narrow down the issue and have an example that is limited to tweakreg itself and that indicates that the issue is with tweakreg, that would be helpful.

stscijgbot commented 5 years ago

Comment by Mihai Cara: Also, Warren recently tested almost identical code (tweakwcs) on HST images (the tests were done on images from the two WFC chips), and he told me yesterday that he did not detect a single alignment issue (I believe they used a set on the order of >100,000 images). Since the alignment part is identical in the jwst.tweakreg and tweakwcs packages, I doubt (but cannot exclude 100%, of course) that there is anything wrong with tweakreg. This is why I have a strong feeling the issue likely lies with the images themselves, but I am not familiar with how they are produced/simulated.

stscijgbot commented 5 years ago

Comment by James Davies: If you run the pipeline with {{tweakreg}} turned off for just the Module A association, it produces a mosaic with 2 distinct pointings that are about 0.5 arcsec offset from each other. Is this what was intended with the simulated images in the association?

Have you tried running the pipeline with {{tweakreg}} search tolerance increased to recover this misalignment?

stscijgbot commented 5 years ago

Comment by James Davies: Another suggestion would be to verify the SIAF and distortion files that {{mirage}} used to place sources in the images: they need to be the same ones currently released for use in CRDS and in SDP for all of this to work. I think {{mirage}} currently uses the unreleased distortion reference files and SIAF info for NIRCam.

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: [~jdavies] [~mcara] I can confirm something went wrong with the NIRCam SW module A simulations. I will have to trace the problem. Unfortunately, it is not straightforward.

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: Hi [~jdavies] and [~mcara] 

After several weeks of testing and some productive talks with the developer of the simulation tool, I finally found the solution to this problem and thought that you might want to know.

The problem was neither in the simulations nor in the pipeline; it lay in mixing two different types of distortion files.

During stage 2 of the NIRCam pipeline, I had to point to a different distortion file, as in the following example:

wcs_reffile = 'NRCA1_FULL_from_pysiaf_distortion.asdf'

I assign a different file according to the NIRCam detector. 
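A sketch of how such a per-detector override might be applied when running stage 2 in Python; the {{override_distortion}} attribute relies on stpipe's generic reference-file override mechanism, and the input file name is illustrative:

{code:python}
# a sketch, assuming stpipe's per-step reference-file overrides
from jwst.pipeline import Image2Pipeline

wcs_reffile = 'NRCA1_FULL_from_pysiaf_distortion.asdf'  # chosen per detector

im2 = Image2Pipeline()
im2.assign_wcs.override_distortion = wcs_reffile
im2.run('lmc-f150w-nrca1_rate.fits')  # illustrative input file name
{code}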

Once the stage 2 images are produced, I generate the final mosaic and it looks great. So far I have tested modules A and B with filters F150W and F200W.

Thank you for your help.

Leonardo

 

 

stscijgbot commented 5 years ago

Comment by James Davies: Excellent! Glad you were able to figure things out. I guess once the new NIRCam distortion files are delivered and make it into the default CRDS context, we shouldn't have to worry about this.

stscijgbot commented 5 years ago

Comment by Alicia Canipe: [~lubeda] thanks for working on this! Where did the good reference file (the one that worked correctly) come from? Are the ones that didn't work the ones the pipeline uses by default?

stscijgbot commented 5 years ago

Comment by James Davies: The issue was that the data were simulated with a new (different) distortion reference file from the one that has been delivered to CRDS and is in use in the pipeline. That accounted for the differences seen. The simulated data need to be generated with the same distortion model that is used in the pipeline for things to work.

So when the new distortion reference files get delivered and ingested into CRDS, then the pipeline will use those, and the Mirage simulations should then use those too.

stscijgbot commented 5 years ago

Comment by Alicia Canipe: Ah I see. Sorry, I misunderstood. Thanks, [~jdavies]!

stscijgbot commented 5 years ago

Comment by Alicia Canipe: Issue reported in the ticket was resolved.

stscijgbot commented 5 years ago

Comment by Alicia Canipe: I'm going to close this issue, since the errors in the ticket were resolved. I will ask the NIRCam team to open a new ticket to report testing of the Image3 pipeline for Build 7.3.

stscijgbot commented 5 years ago

Comment by Leonardo Ubeda: [~acanipe] May I use this ticket to report a new problem with Image3, or is there another one? Thank you.

stscijgbot commented 5 years ago

Comment by Alicia Canipe: [~lubeda] sorry for the delay in my response; I was out of town. Let's start a new ticket; I'll email you with instructions.

stscijgbot commented 4 years ago

Comment by Stacy (Anastasia) Smith: Previously contained component: "validation_testing".