Keck-DataReductionPipelines / MosfireDRP

http://keck-datareductionpipelines.github.io/MosfireDRP

Issue in wavelength calibration #148

Open itjung opened 3 years ago

itjung commented 3 years ago

I am trying to reduce my MOSFIRE data and encountered an issue in the wavelength calibration step. I have four masks to reduce, and the DRP crashes on one of them; the other three have been reduced properly. The DRP stops during (or just after) the interactive wavelength calibration, with the error message below. I searched for similar issues here but couldn't find any. Can anyone share their experience with this issue?


```
2021-05-14 00:21:48,159 - Wavelength.fit_outwards_refit - INFO: Computing 0 spectrum at 89
2021-05-14 00:22:01,837 - Wavelength.fit_lambda_helper - INFO: S12] TOOK: 214 s
2021-05-14 00:22:11,235 - Wavelength.fit_lambda_helper - INFO: S21] TOOK: 92 s
2021-05-14 00:22:13,825 - Wavelength.fit_lambda_helper - INFO: S18] TOOK: 99 s
2021-05-14 00:22:49,558 - Wavelength.fit_lambda_helper - INFO: S16] TOOK: 188 s
2021-05-14 00:23:29,749 - Wavelength.fit_lambda_helper - INFO: S19] TOOK: 174 s
2021-05-14 00:23:31,478 - Wavelength.fit_lambda_helper - INFO: S20] TOOK: 174 s
2021-05-14 00:23:34,324 - Wavelength.fit_lambda_helper - INFO: S23] TOOK: 169 s
2021-05-14 00:23:40,921 - Wavelength.fit_lambda_helper - INFO: S25] TOOK: 157 s
2021-05-14 00:23:41,475 - Wavelength.fit_lambda_helper - INFO: S22] TOOK: 178 s
2021-05-14 00:23:48,993 - Wavelength.fit_lambda_helper - INFO: S03] TOOK: 412 s
2021-05-14 00:23:52,509 - Wavelength.fit_lambda_helper - INFO: S26] TOOK: 124 s
2021-05-14 00:24:11,445 - Wavelength.fit_lambda_helper - INFO: S24] TOOK: 192 s
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/Users/ijung1/opt/anaconda3/envs/mospy_2018_macos/lib/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/Users/ijung1/opt/anaconda3/envs/mospy_2018_macos/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/Users/ijung1/opt/anaconda3/envs/mospy_2018_macos/lib/python3.6/site-packages/MOSFIRE-1.0.dev0-py3.6.egg/MOSFIRE/Wavelength.py", line 410, in fit_lambda_helper
    start, bottom, top, slitno)
  File "/Users/ijung1/opt/anaconda3/envs/mospy_2018_macos/lib/python3.6/site-packages/MOSFIRE-1.0.dev0-py3.6.egg/MOSFIRE/Wavelength.py", line 2337, in fit_outwards_refit
    params = sweep(positions)
  File "/Users/ijung1/opt/anaconda3/envs/mospy_2018_macos/lib/python3.6/site-packages/MOSFIRE-1.0.dev0-py3.6.egg/MOSFIRE/Wavelength.py", line 2324, in sweep
    return {'coeffs': cfits, 'delts': delt, 'lambdaRMS':
UnboundLocalError: local variable 'delt' referenced before assignment
"""
```

The above exception was the direct cause of the following exception:

```
Traceback (most recent call last):
  File "Driver.py", line 36, in <module>
    Wavelength.fit_lambda(maskname, band, obsfiles, obsfiles, waveops)
  File "/Users/ijung1/opt/anaconda3/envs/mospy_2018_macos/lib/python3.6/site-packages/MOSFIRE-1.0.dev0-py3.6.egg/MOSFIRE/Wavelength.py", line 365, in fit_lambda
    solutions = p.map(fit_lambda_helper, list(range(len(bs.ssl))))
  File "/Users/ijung1/opt/anaconda3/envs/mospy_2018_macos/lib/python3.6/multiprocessing/pool.py", line 266, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/Users/ijung1/opt/anaconda3/envs/mospy_2018_macos/lib/python3.6/multiprocessing/pool.py", line 644, in get
    raise self._value
UnboundLocalError: local variable 'delt' referenced before assignment
```

joshwalawender commented 3 years ago

@itjung Can you send me a minimal data set to reduce the mask which is failing? Perhaps you can reply here with a download link for the data. Hopefully I'll be able to reproduce the problem here and find a solution.

thanks, Josh

itjung commented 3 years ago

@joshwalawender Thank you so much for looking into this! I am sharing a Google Drive link to a zip file (https://drive.google.com/file/d/1MKE3bkkQ_ZB1AMMMVEj6fwHYHyqUDgCf/view?usp=sharing). It contains flat and arc images along with a pair of science frames. I ran the DRP on this minimal data set and got the same error message, so you should be able to use it to reproduce the problem.

Thanks! Intae Jung

swkimastro commented 2 years ago

@joshwalawender Hi, I'm a student working with Intae. The problem occurs when dealing with the standard star s1608009985819036288, and I believe I found out why: the bottom value of the slit edge is larger than the top value. As a result, an empty array is assigned by `ll = lambdas[1].data[bot:top, :]` (Rectify.py L381). That causes an error when computing `lmid = ll[ll.shape[0]//2, :]` (L380), which ultimately leaves the `delt` variable unassigned.
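For what it's worth, the failure mode described above can be illustrated in isolation. This is a schematic sketch, not MOSFIRE's actual `sweep` code: when a variable is only assigned inside a loop over an array, an empty array means the loop body never runs and the later reference raises `UnboundLocalError`:

```python
import numpy as np

def sweep_like(positions):
    """Schematic stand-in (NOT the DRP's sweep): 'delt' is only bound
    inside the loop, so an empty input leaves it undefined."""
    for row in positions:
        delt = row.max() - row.min()  # never executes for an empty array
    return {'delts': delt}            # raises UnboundLocalError if loop was skipped

ll = np.empty((0, 2048))  # what bot >= top produces via data[bot:top, :]
try:
    sweep_like(ll)
except UnboundLocalError as e:
    print('reproduced:', e)
```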

I worked around it by crudely forcing the bottom value from slitedge to be smaller than the top value.
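That workaround could be sketched as a small guard applied before the slice is taken. This is a hypothetical helper illustrating the idea, not a patch against the DRP; the clamping/swap policy is my own assumption:

```python
import numpy as np

def safe_slice_bounds(bots, tops, nrows=2048):
    """Hypothetical helper (not part of the MOSFIRE DRP): return (bot, top)
    with 0 <= bot < top <= nrows, so data[bot:top, :] is never empty."""
    top = int(min(np.floor(np.min(tops)), nrows))
    bot = int(max(np.ceil(np.max(bots)), 0))
    if bot >= top:                 # degenerate slit edge: reorder the bounds
        bot, top = min(bot, top), max(bot, top)
        if bot == top:             # still zero-height: widen by one row
            top = min(bot + 1, nrows)
            bot = top - 1
    return bot, top

# inverted edge like the failing slit: bounds come back reordered
print(safe_slice_bounds(np.array([1200.0]), np.array([1150.0])))  # → (1150, 1200)
```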

Below is a minimal reproducible example; could you please check?

Thank you in advance! :)

Seonwoo Kim

```python
import numpy as np
from MOSFIRE import IO, CSU, Options, Filters

Wavelength_file = 'lambda_solution_wave_stack_Y_m210424_0174-0245.fits'
maskname = 'EGS_Y_2021A_4'
band = 'Y'
waveops = Options.wavelength

edges, meta = IO.load_edges(maskname, band, waveops)
pix = np.arange(2048)
hpp = Filters.hpp[band]

edgeno = 14  # the standard star with the error
edge = edges[edgeno]
tops = edge['top'](pix)
bots = edge['bottom'](pix)
top = int(min(np.floor(np.min(tops)), 2048))
bot = int(max(np.ceil(np.max(bots)), 0))  # for this slit, bot comes out >= top

lambdas = IO.readfits(Wavelength_file, waveops)
ll = lambdas[1].data[bot:top, :]          # empty slice when bot >= top
print(bot, top)
print(ll)
```

NicolasLaporte commented 2 years ago

Is there any news on this issue? I am facing the same error when reducing a MOSFIRE dataset. Many thanks in advance, Nicolas