gijsverdoeskleijn opened this issue 3 years ago
Files in folder /IJ_FF/ (https://drive.google.com/drive/folders/1ODpA7jbZOAkxzMDgbcsJTNg2l0hAAqX7) are spectroscopic flats in the IJ band. I am not sure what the difference between the files is, as the names refer to different numbers of mirrors (?):

- IJ_FF_nummirrors=0_1.fits
- IJ_FF_nummirrors=0_2.fits
- IJ_FF_nummirrors=0_3.fits
- IJ_FF_nummirrors=0_4.fits
- IJ_FF_nummirrors=0_5.fits
- IJ_FF_nummirrors=0_6.fits
- IJ_FF_nummirrors=0_7.fits
- IJ_FF_nummirrors=0_8.fits
- IJ_FF_nummirrors=0_9.fits
- IJ_FF_nummirrors=0_10.fits
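One way to see what actually differs between these nummirrors flats is to diff their headers; a minimal sketch, assuming astropy is installed and two of the files have been downloaded into the working directory:

```python
from astropy.io import fits

# Compare the primary headers of two of the flats; any keyword whose value
# differs (e.g. something encoding the number of mirrors) will be printed.
h1 = fits.getheader("IJ_FF_nummirrors=0_1.fits")
h2 = fits.getheader("IJ_FF_nummirrors=0_2.fits")

for key in sorted(set(h1.keys()) | set(h2.keys())):
    if h1.get(key) != h2.get(key):
        print(f"{key}: {h1.get(key)!r} vs {h2.get(key)!r}")
```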
Files in folder /IJ_Setup/ (https://drive.google.com/drive/folders/1BeSnoCC6Y_6FXVvu61KWCoxavRRen0el) are:

- IJ_FF_pinh.fits : a spectroscopic flat through equally spaced pinholes
- IJ_mpia_pinh.fits : seems empty!!!?
- XeArKrNe+FPI_0.8-2.6mum_1.0_x_lam_step_sim_IJ_shift-0.fits : MPIA line lamp spectra, for wavelength calibration
- freq_comb_1.0_x_lam_step_sim_IJ_shift-0.fits : artificial frequency comb spectra (to test the output of the wavelength calibration from PyReduce once we get it running)
- gd153_IJ_shift-0.fits : not sure what this is, other than that it is a long-slit frame.
Thanks! Then how is this file 'IJ_mpia_pinh.fits' different from 'XeArKrNe+FPI_0.8-2.6mum_1.0_x_lam_step_sim_IJ_shift-0.fits'?
The first is line lamps through the pinholes whereas the second one is line lamps through the entire slit (Memo to myself: use more descriptive filenames ;-) ). I can't have a look right now, but the lines in the latter should be along the entire slit, whereas in the first there should be only points along the slit
Ok. It's true they run along the entire slit in 'XeArKrNe+FPI_0.8-2.6mum_1.0_x_lam_step_sim_IJ_shift-0.fits'. It was 'IJ_mpia_pinh.fits' that was unclear to me, and I still see nothing in it. Yes, we may need to create another one for PyReduce if it cannot recognize it.
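To check whether that frame is truly empty rather than just faint under the default display cuts, one can print basic pixel statistics; a minimal sketch, assuming astropy and numpy are available and a local copy of the file:

```python
import numpy as np
from astropy.io import fits

# List the extensions and print simple statistics for each image HDU
# of the suspect pinhole frame.
with fits.open("IJ_mpia_pinh.fits") as hdul:
    hdul.info()
    for hdu in hdul:
        if hdu.data is None:
            continue
        data = np.asarray(hdu.data, dtype=float)
        print(hdu.name, "min/max/median:",
              np.nanmin(data), np.nanmax(data), np.nanmedian(data))
```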
OK, I'll have to look into it again.
Below is an updated description of the file list (after feedback from @wkausch). All files are in the IJ band for the short slit.
Files in folder /IJ_FF/ (https://drive.google.com/drive/folders/1ODpA7jbZOAkxzMDgbcsJTNg2l0hAAqX7) are spectroscopic flats in the IJ band created with different numbers of mirrors. One file would be enough for PyReduce as a Master Flat.

/IJ_FF/
- IJ_FF_nummirrors=0_1.fits
- IJ_FF_nummirrors=0_2.fits
- IJ_FF_nummirrors=0_3.fits
- IJ_FF_nummirrors=0_4.fits
- IJ_FF_nummirrors=0_5.fits
- IJ_FF_nummirrors=0_6.fits
- IJ_FF_nummirrors=0_7.fits
- IJ_FF_nummirrors=0_8.fits
- IJ_FF_nummirrors=0_9.fits
- IJ_FF_nummirrors=0_10.fits
Files in folder /IJ_Setup/ (https://drive.google.com/drive/folders/1BeSnoCC6Y_6FXVvu61KWCoxavRRen0el):
/IJ_Setup/
- IJ_FF_pinh.fits : a spectroscopic flat through equally spaced pinholes
- IJ_mpia_pinh.fits : MPIA line lamp spectra through equally spaced pinholes. Lines are not visible with a FITS viewer; the file may need to be recreated.
- XeArKrNe+FPI_0.8-2.6mum_1.0_x_lam_step_sim_IJ_shift-0.fits : MPIA line lamp spectra through the entire slit
- freq_comb_1.0_x_lam_step_sim_IJ_shift-0.fits : artificial frequency comb spectra (to test the output of the wavelength calibration from PyReduce once we get it running)
- gd153_IJ_shift-0.fits : spectrum of the flux standard GD153 in the IJ band (not necessary for PyReduce's main tasks)
Thanks @nadsabha and @wkausch for the clarifications. This morning I understood that you plan to do order detection, rectification and wavelength calibration first on one trace in J or H with a suitable set of arc-lamp lines. I now see there is no simulated HK data... so H is not an option. Based on that and the clarifications above, here is my guess at the answer to my request to establish the list of files that PyReduce needs as input:
Please review and correct my guess on the needed FITS files. Once that is done, @hugobuddel and I can continue to help check that micado.json correctly maps the FITS header keywords, or hardcodes a value, for these FITS files.
Lastly, if we cannot get it to work in the J band because of J-band issues, I am hesitant to fall back to the I band, because MICADO is primarily intended for JHK spectroscopic science. So that we are aware of the risks to completing this in time: @wkausch, how much time would you need to make simulated HK data for steps 1, 2 and 3 above?
ad 2 (quick answer): PyReduce test data contain such pinhole frames, so I don't see a reason why they shouldn't work with ours
ad 3) HK is available, but it seems not to be on google drive
> ad 2 (quick answer): PyReduce test data contain such pinhole frames, so I don't see a reason why they shouldn't work with ours
Great, that is a relief. Sorry that I did not recall that....I'm clearly not an expert on PyReduce.
> ad 3) HK is available, but it seems not to be on google drive.
Another relief: please add them to Google Drive, with a README explaining what is what, for Nadeen.
Thanks @wkausch !
I'm trying to redo the pinhole frames now. Not sure why they're so weird... The FF pinhole frames look nice... but I have some ideas of what to try.
@nadsabha: I created a new IJ-linelamp pinhole frame. You can grab it from here:
https://astro-staff.uibk.ac.at/~kausch/IJ_mpia_pinh_new.fits.gz
Please have a look. If you use -300 and 1600 as cuts you can see the pinhole dots very well. I hope that is fine for you & pyreduce ;-). There seems to be a bug in SpecCADO, so I had to find a workaround.
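For anyone without a FITS viewer at hand, the same look can be had with matplotlib; a minimal sketch, assuming astropy and matplotlib are installed and the downloaded file is in the working directory (astropy reads the gzipped FITS directly):

```python
import matplotlib.pyplot as plt
from astropy.io import fits

# Display the new pinhole frame with the suggested cuts of -300 and 1600.
data = fits.getdata("IJ_mpia_pinh_new.fits.gz")
plt.imshow(data, vmin=-300, vmax=1600, origin="lower", cmap="gray")
plt.colorbar(label="counts")
plt.title("IJ_mpia_pinh_new")
plt.show()
```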
@gijsverdoeskleijn: if (I find out how to do) & (when I'm at the office due to connection speed) & (when I remember it) I'll upload HK stuff to google drive else please remind me
I would be better able to assist with the header conversion if we could create a single folder on google drive that contains a single full set of data to be used as input for PyReduce. It doesn't have to be perfect / complete yet; we can iterate on it. The UVES datasets all have official ESO names (that is, with just the observation date); we can use more descriptive names.
Could you make a directory like that @nadsabha ?
@nadsabha and @wkausch ,
Gentle reminder to review my draft of the list of files that should go into the "simulation data for spectro critical algorithm prototypes" directory, which is a good suggestion from Hugo.
@wkausch
> @nadsabha: I created a new IJ-linelamp pinhole frame. You can grab it from here:
> https://astro-staff.uibk.ac.at/~kausch/IJ_mpia_pinh_new.fits.gz
> Please have a look. If you use -300 and 1600 as cuts you can see the pinhole dots very well. I hope that is fine for you & pyreduce ;-). There seems to be a bug in SpecCADO, so I had to find a workaround.
>
> @gijsverdoeskleijn: if (I find out how to do)

Click on the "+ NEW" button top left.

> & (when I'm at the office due to connection speed) & (when I remember it) I'll upload HK stuff to google drive else please remind me
@wkausch : gentle reminder ;-)
Sorry for the delay @gijsverdoeskleijn. Regarding your comments/questions:
The Google Drive data were not meant to be representative of the entire simulated dataset. They were meant to get us started with PyReduce in the IJ band; once that works one can move on to other bands. For the initial, different prototyping approach we had in 2019 and the beginning of 2020, I created a set of spectral FF data in the HK, J & I bands (long and short slit) and also through the pinholes for the short slit. I used the same setup as @wkausch; however, some data were created after mine in the past months, and for consistency it is better to use the newer sets, which @wkausch will upload.
Ad 2) Yes, IJ_FF_pinh.fits has the pinholes along the entire slit. Ad 3) I would first use XeArKrNe+FPI_0.8-2.6mum_1.0_x_lam_step_sim_IJ_shift-0.fits because it is the more realistic one, and it also matches the micado.npz file (the wavelength-calibration initial-guess file I am creating). The other file, freq_comb_1.0_x_lam_step_sim_IJ_shift-0.fits, is an artificial line list equally spaced in wavelength, intended to test the accuracy of the procedure.
@hugobuddel, yes, for operating PyReduce the data eventually all need to be in one directory. I had not done it so far since we are not at that stage yet and other things needed for operating PyReduce take priority. But since you need it already, I have copied the needed files to the new directory raw.
A general note: I'm skeptical that we can continue using Google Drive this way. I shouldn't upload from my end since I have already reached 80% of my storage (anything I create or upload takes from my limited storage space).
Thank you @nadsabha ! This list makes it easier to figure out what actually needs to be done w.r.t. the header translation.
W.r.t. using Google Drive: I'm open to alternatives. I think it is very valuable to have a shared place where we can share such datasets, so we are sure we all use the exact same data. For me personally, preventing discussions and the effort of keeping data synchronized is definitely worth the 20 euro per year that 20 GB costs at Google. (I don't particularly like Google, but for this it is good enough for me.)
For now I have renamed your folder to raw_old, created a new folder called raw (it was raw2 before) and copied the 5 files from raw_old to raw. So now they count against my quota, since I have enough space. I then deleted raw_old (your directory), but Google Drive said it only removed it from 'my view' and that you still have it. You can remove it yourself to reclaim your space, I think.
Thank you @hugobuddel for moving the files onto your quota. I agree it is best to have all the data in one place accessible to everyone involved. What I should have mentioned is that I personally work on Dropbox and have plenty of space there in my subscription. I could share the data folder from there if all of us are OK with that.
@wkausch : I now hand the lead back to you to complete the header keyword mapping activity in support of Nadeen. The files for which it needs to work are in this Google Drive dir raw. But feel free to reach out to @hugobuddel and me with questions on it.
@nadsabha : thanks for the Dropbox offer. Now that we have this Google Drive dir raw in place, there is no need to move to Dropbox.
> They were meant to get us started with PyReduce in the IJ band; once that works one can move on to other bands.
OK, so I'll hold off on uploading the HK data.
@gijsverdoeskleijn & @hugobuddel : Can you give me a brief update on how far you got with the header keyword mapping? Then I'll proceed from that point. I have already downloaded the files.
@nadsabha Where's the micado.json file for the header stuff? BTW: It won't make a big difference, but I'll use Pyreduce v0.4.29, which was released Jun22.
It should be this one micado.json.
Thx. BTW: Are you using Method 1 or Method 2? https://pyreduce-astro.readthedocs.io/en/latest/instruments.html
I guess #2?
yes 2.
> Can you give me a brief update on how far you got with the header keyword mapping? Then I'll proceed from that point. I have already downloaded the files.
Not much farther than Kieran got, indeed, in https://github.com/astronomyk/PyReduce/blob/master/MICADO_info/micado.json, because we didn't have a clear list of raw data to apply the mapping to.
@nadsabha I updated the keyword stuff in the micado.json file (see the raw Google folder). Unfortunately, I do not have a complete overview of which data PyReduce exactly requires, but I followed the PyReduce xshooter approach: the format-check files ("id_format" in micado.json) are, I assume, the linelamp pinhole frames. The other files need a keyword by which they are identified ("kw_xxxx") and a corresponding value of this keyword ("id_xxxx"). Here I implemented the following allocation:
"kw_flat": "HIERARCH ESO DPR TYPE" with "id_flat": "SFLAT" (these keywords we defined in the DRLD) "kw_curvature": "HIERARCH ESO DPR TYPE" with "id_curvature": "WAVE" etc...following the xsh example (see micado.json)
In case this doesn't work you need to adapt it to the needs of pyreduce. I'm afraid I'm not a great help here since I don't know what pyreduce expects. I propose you contact Ansgar Wehrhan in case there are major troubles.
I also updated the FITS headers of the simulations to contain the corresponding keywords (and renamed the files for more clarity ;-) ). They are in the raw dir on the Google Drive:
- IJ_FF_newheaders.fits: spectroscopic flatfield
- IJ_FF_pinh_newheaders.fits: pinhole frames with the flatfield lamp
- IJ_mpia_newheaders.fits: linelamp spectrum, full slit
- IJ_mpia_pinh_newheaders.fits: pinhole frame with the line lamps
- IJ_freqcomb_newheaders.fits: frequency comb spectrum ("science" target)
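A quick sanity check that the new headers carry the DPR TYPE values assumed in micado.json could look like this; a minimal sketch, assuming astropy and local copies of the renamed files in a raw/ directory:

```python
from astropy.io import fits

# Renamed files with updated headers, as listed above (assumed local copies in raw/).
files = [
    "raw/IJ_FF_newheaders.fits",
    "raw/IJ_FF_pinh_newheaders.fits",
    "raw/IJ_mpia_newheaders.fits",
    "raw/IJ_mpia_pinh_newheaders.fits",
    "raw/IJ_freqcomb_newheaders.fits",
]

# Print the DPR TYPE of each frame so it can be checked against the
# kw_*/id_* allocation in micado.json (e.g. "SFLAT", "WAVE", ...).
for name in files:
    header = fits.getheader(name)
    print(name, "->", header.get("HIERARCH ESO DPR TYPE", "<keyword missing>"))
```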
Hope that helps :-).
The mapping from new to old file names should be:

- IJ_FF_newheaders.fits = IJ_FF_nummirrors=0_1.fits
- IJ_FF_pinh_newheaders.fits = IJ_FF_pinh.fits
- IJ_mpia_newheaders.fits = XeArKrNe+FPI.......fits
- IJ_mpia_pinh_newheaders.fits = IJ_mpia_pinh_new.fits
- IJ_freqcomb_newheaders.fits = freq_comb1.0......fits
So you can remove the old ones (they are stored locally on my machine).
Thank you @wkausch. I removed the duplicate older files (I also have them locally stored).
The directory raw now contains:

- IJ_FF_newheaders.fits: spectroscopic flatfield
- IJ_FF_pinh_newheaders.fits: pinhole frames with the flatfield lamp
- IJ_mpia_newheaders.fits: linelamp spectrum, full slit
- IJ_mpia_pinh_newheaders.fits: pinhole frame with the line lamps
- IJ_freqcomb_newheaders.fits: frequency comb spectrum ("science" target)
@hugobuddel: I have placed the micado.json file that @wkausch modified (which you'll need for the instrument class file) in this directory PyReduceFiles under the parent directory PyReduce_DATA that we all have access to.
Perhaps we can place the files that should go into PyReduce (like micado.json) directly in this repository. Even better would be to place them directly in the location where they should end up, because that's what we need to be able to run the code anyway.
@hugobuddel please see https://github.com/astronomyk/PyReduce/issues/3#issuecomment-876559991; the updated micado.json file is in this repository.
Dear @nadsabha and @wkausch ,
We had a first look at micado.json. To make progress we need your input:
Cheers, @hugobuddel and Gijs