This is a much more compact version of the pipeline. A single script now does all the work up to the extraction of the data cubes, including choosing the right observing mode, reducing both the blue and red arms, and creating the directories where the reduced data are saved. This way, running `reduce_data.py raw_data` is all that is needed to reduce and save all the data. In addition, the data reduction pipeline (DRP) setup parameters are now read from separate `.json` files instead of a list of dictionaries at the beginning of the scripts. In summary, these are the main changes:
- Data reduction is done in a single script, `reduce_data.py`, for both the blue and red arms (instead of `reduce_blue_data.py` and `reduce_red_data.py`).
- Implemented data classification in `data_classifier.py` (located in `pipeline/src/`), which is loaded by `reduce_data.py`. This was previously done by `generate_metadata.py` followed by running the generated files `save_blue_data.py` and `save_red_data.py`. A rough sketch of the classification idea is included after this list.
- `reduce_data.py` creates the directories for the reduced data (`reduc_red` and `reduc_blue`) if they do not already exist.
- There are two `.json` templates with the DRP setup parameters: `params_class.json` and `params_ns.json`, for the classical and nod-and-shuffle observing modes respectively. `reduce_data.py` automatically checks the observing mode of the data and chooses the corresponding `.json` file for the reduction (see the sketch after this list).
- Updated `steps.py` and `.gitignore` accordingly.
- Removed a few defunct scripts.
- Added a sentence at the beginning of `README.md` so users know that the information there is outdated for our versions (we should write a new one at some point).
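
As a rough illustration of the new workflow, the sketch below shows how `reduce_data.py` could pick the parameter file from the observing mode and create the output directories. This is only a hedged sketch under stated assumptions: the function name `get_observing_mode` and the `OBSMODE` header keyword are hypothetical placeholders, not the actual implementation in the repository.

```python
# Hedged sketch of the reduce_data.py workflow described above.
# get_observing_mode() and the "OBSMODE" keyword are hypothetical; the real
# script may detect the observing mode differently.
import json
import os
import sys

from astropy.io import fits


def get_observing_mode(raw_dir):
    """Guess the observing mode ("class" or "ns") from the first raw FITS header."""
    first_file = sorted(f for f in os.listdir(raw_dir) if f.endswith(".fits"))[0]
    header = fits.getheader(os.path.join(raw_dir, first_file))
    # "OBSMODE" is a placeholder keyword used only for this illustration.
    return "ns" if "nod" in str(header.get("OBSMODE", "")).lower() else "class"


if __name__ == "__main__":
    raw_dir = sys.argv[1]                    # e.g. `reduce_data.py raw_data`
    mode = get_observing_mode(raw_dir)
    params_file = f"params_{mode}.json"      # params_class.json or params_ns.json

    with open(params_file) as f:
        params = json.load(f)                # DRP setup parameters

    # Create the reduced-data directories if they do not already exist.
    for out_dir in ("reduc_blue", "reduc_red"):
        os.makedirs(out_dir, exist_ok=True)

    # ... reduce the blue and red arms using `params` ...
```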
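
Similarly, the idea behind `data_classifier.py` can be illustrated as grouping the raw frames by arm and frame type from their FITS headers. This is only a sketch of the concept: the header keywords `CAMERA` and `IMAGETYP` and the function name below are assumptions for illustration, not the module's real API.

```python
# Hedged sketch of the classification idea; not the real data_classifier.py API.
import os
from collections import defaultdict

from astropy.io import fits


def classify_raw_data(raw_dir):
    """Group raw FITS files by arm (blue/red) and frame type (bias, flat, arc, object)."""
    groups = defaultdict(list)
    for name in sorted(os.listdir(raw_dir)):
        if not name.endswith(".fits"):
            continue
        header = fits.getheader(os.path.join(raw_dir, name))
        arm = str(header.get("CAMERA", "unknown")).lower()           # placeholder keyword
        frame_type = str(header.get("IMAGETYP", "unknown")).lower()  # placeholder keyword
        groups[(arm, frame_type)].append(name)
    return groups
```

With a grouping like this, the driver script can hand each set of frames to the corresponding blue or red reduction step without the user generating intermediate scripts by hand.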