As of Feb 2022, there are no docs explaining how to add new sensors. This should be added to the documentation to help other developers pick up the project.
An overview of the process (snarfed from Mitch), assuming a new sensor called `blah`:
- Create an `insar/sensors/blah.py` module for the sensor that implements `METADATA` / `get_data_swath_info` / `acquire_source_data` (like the other sensors; RSAT2 and ALOS are good examples).
  - `source_path` is the path to either the raw data dir or an archive file (from the input file list?)
  - `dst_dir` is the `some/dir/path/raw_data/{scene_date}` dir
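A minimal sketch of what such a sensor module might look like. The three names come from the step above, but the exact signatures, return shapes, and `METADATA` contents here are assumptions, not the project's actual interface:

```python
# Hypothetical sketch of insar/sensors/blah.py -- signatures and return
# shapes are assumptions, not the project's real interface.
from pathlib import Path

# Constants the workflow uses to identify the sensor (assumed shape).
METADATA = {
    "name": "BLAH",
    "constellation_members": ["BLAH-1"],
}

def get_data_swath_info(data_path: Path):
    """Return one entry per swath/polarisation found in the source data."""
    # A real sensor module parses product metadata here; this is a stub.
    return [{"date": "20220101", "swath": "1", "polarization": "HH"}]

def acquire_source_data(source_path: Path, dst_dir: Path) -> Path:
    """Copy or extract the source data into dst_dir.

    source_path: a raw data dir or an archive file (see notes above)
    dst_dir:     the .../raw_data/{scene_date} dir
    Returns the product path the SLC processing code should read from.
    """
    dst_dir.mkdir(parents=True, exist_ok=True)
    # For archives, something like shutil.unpack_archive(source_path, dst_dir)
    return dst_dir
```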
- Add it to `_sensors` in `insar/sensors/data.py` (and other relevant parts like `identify_data_source()`).
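The registration step might look like the following sketch; the real layout of `insar/sensors/data.py` may differ, and the stub module plus the matching logic in `identify_data_source()` are illustrative assumptions:

```python
# Sketch of the sensor registry assumed to live in insar/sensors/data.py.
# _StubModule stands in for an imported sensor module (e.g. insar.sensors.blah).

class _StubModule:
    METADATA = {
        "name": "BLAH",
        "constellation_members": ["BLAH-1"],
    }

# The new sensor is added alongside the existing RSAT2/ALOS/S1 entries.
_sensors = {
    "BLAH": _StubModule,
}

def identify_data_source(name: str) -> str:
    """Return the key of the sensor that recognises `name` (assumed logic)."""
    for key, module in _sensors.items():
        if name in module.METADATA["constellation_members"]:
            return key
    raise ValueError(f"Unrecognised data source: {name!r}")
```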
- Add processing code in `insar/process_blah_slc.py` (the Bash port).
- Run the processing code on a GAMMA-enabled system to generate real outputs.
- Add and implement `tests/test_process_blah_slc.py` (and add test data if necessary).
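A minimal sketch of the shape such a test module might take, assuming pytest (whose `tmp_path` fixture supplies a per-test temp dir). The processing function below is a self-contained stand-in, not the real `insar.process_blah_slc` API:

```python
# Hypothetical shape of tests/test_process_blah_slc.py. The processing
# function is a stand-in so this example runs on its own.
from pathlib import Path

def process_blah_slc(raw_dir: Path, out_dir: Path) -> Path:
    """Stand-in for the real SLC processing entry point."""
    out_dir.mkdir(parents=True, exist_ok=True)
    slc = out_dir / "scene.slc"
    slc.touch()
    return slc

def test_process_blah_slc_creates_output(tmp_path):
    # tmp_path is pytest's per-test temporary directory fixture
    slc = process_blah_slc(tmp_path / "raw_data", tmp_path / "slc")
    assert slc.exists()
    assert slc.suffix == ".slc"
```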
- `product_path` specifies whatever dir(s) are returned from `acquire_source_data()`, which may be the root of an extracted archive file. This path can affect file searches (which may be painful to debug on a production machine).
- Add `insar/workflow/luigi/blah.py` with tasks for running the processing code (similar to rs2/s1/etc. in that module).
- In `insar/workflow/luigi/multilook.py`, update `CreateMultilook.requires()` to add a case for the new sensor (multilook is what depends on the sensor-specific SLC processing tasks).
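The per-sensor dispatch can be sketched as below. The task class names are placeholders, and the real `CreateMultilook.requires()` is a luigi `Task` method rather than a free function; only the dispatch pattern carries over:

```python
# Sketch of the sensor -> SLC-task dispatch assumed inside
# CreateMultilook.requires(). Task classes here are placeholders, not
# real luigi tasks.

class ProcessRSAT2Slc: ...
class ProcessALOSSlc: ...
class ProcessBlahSlc: ...  # the new sensor's SLC processing task

_SLC_TASKS = {
    "RSAT2": ProcessRSAT2Slc,
    "ALOS": ProcessALOSSlc,
    "BLAH": ProcessBlahSlc,  # new case added for the new sensor
}

def slc_task_for(sensor: str):
    """Pick the SLC task class for a scene's sensor (assumed dispatch)."""
    try:
        return _SLC_TASKS[sensor]
    except KeyError:
        raise ValueError(f"No SLC processing task registered for {sensor!r}")
```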
- In `insar/workflow/luigi/resume.py`, update `ReprocessSingleSLC.run()` for the new sensor.
- Update `insar/project.py`.
- Test that the production code works on Gadi or another GAMMA-enabled system.
- Ask the InSAR team to verify the results of the real data run.
- Update the relevant README docs for the new sensor.
ALOS or RS2 are useful as templates (they're the simple/normal cases, whereas S1 is an exceptional case with more complicated code due to subswaths and bursts, and it has its own special coregistration).
Parts 4/5 above will be documented to some extent (Mitch has workflow documentation in the works, but it doesn't reference the code yet; that reference can/should be added).
Open questions:
- Which document should this content be placed in?
- Should placeholder comments be added to the Python modules to assist development?
- Does anything need to be mentioned for local testing?
- What docs should be added regarding testing new sensors with real data on Gadi?