ACHMartin / seastar_project

Add a reader for the OSCAR data to automatically assign channels Fore, Aft, Mid #167

Closed. ACHMartin closed this issue 1 year ago.

ACHMartin commented 1 year ago

Automatically assign channel numbers to Fore, Aft, Mid, ...

DavidMcCann-NOC commented 1 year ago

One potential issue with combining all datasets and then using xr.DataArray.sel to select the right look direction is how to handle passing one look direction back and forth between the computation functions. E.g.:

level1.sel(Antenna='Fore') = ss.oscar.level1.do_something(level1.sel(Antenna='Fore'))

This wouldn't work, as the left-hand side of the assignment is a function call. There must be a way round this, but I haven't found it yet.
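To illustrate the point with toy data (the variable name RSV and the +1 update below are purely illustrative, not project code): Python rejects the assignment outright because the target is a call expression, and `.sel()` returns a new object rather than a writable view anyway. One possible per-variable workaround is label-based assignment through `.loc`:

```python
import numpy as np
import xarray as xr

# Toy stand-in for the combined L1 dataset (variable name RSV is illustrative)
level1 = xr.Dataset(
    {"RSV": (("Antenna", "x"), np.zeros((3, 4)))},
    coords={"Antenna": ["Fore", "Mid", "Aft"]},
)

# level1.sel(Antenna='Fore') = ...   # SyntaxError: cannot assign to function call
# and .sel() returns a new object anyway, not a writable view of level1.

# One per-variable workaround: label-based assignment through .loc
updated = level1["RSV"].sel(Antenna="Fore") + 1.0          # compute on one look direction
level1["RSV"].loc[dict(Antenna="Fore")] = updated.values   # write the values back in place
```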

ACHMartin commented 1 year ago

Good point. I was wondering if assign could do the job, but I am not sure: https://docs.xarray.dev/en/stable/generated/xarray.Dataset.assign.html?highlight=assign or perhaps using merge or concat.

None of these solutions would be very pretty, so it is probably better to park this issue until we have a better solution.
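As a rough sketch of the merge route (not project code; `do_something` stands in for one of the ss.oscar.level1 functions and `NewField` is a made-up output), the single look direction could be promoted back to a length-1 Antenna dimension and merged in, which works but is indeed not pretty:

```python
import numpy as np
import xarray as xr

# Toy stand-ins for the combined L1 dataset and an L1 processing step
level1 = xr.Dataset(
    {"Intensity": (("Antenna", "x"), np.ones((3, 4)))},
    coords={"Antenna": ["Fore", "Mid", "Aft"]},
)

def do_something(ds):
    # placeholder for e.g. ss.oscar.level1.do_something
    return (ds["Intensity"] * 2).rename("NewField")

# Compute on one look direction, promote the scalar Antenna coordinate back to
# a length-1 dimension, then merge; NewField is NaN for the other antennas.
result_fore = do_something(level1.sel(Antenna="Fore")).expand_dims("Antenna")
level1 = xr.merge([level1, result_fore])
```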

DavidMcCann-NOC commented 1 year ago

Agreed. The easy route for the time being would be to run the L1 processing chain on each dataset and then merge them together afterwards into a unified level1 dataset with an antenna angle dimension. Using the nan-replacement method in #165, we could have null variables on the mid side of the combined dataset wherever we haven't calculated L1 products in that look direction.
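For what it's worth, recent xarray versions will already NaN-fill data variables that are missing from one of the inputs to concat, so the combined dataset would come out with null mid-antenna variables much as described. A toy sketch (variable names are illustrative only):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Toy per-beam L1 datasets; the mid antenna has no slave-derived variables
fore = xr.Dataset({"RSV": ("x", np.ones(4)), "SlavePhase": ("x", np.ones(4))})
mid = xr.Dataset({"RSV": ("x", np.zeros(4))})

combined = xr.concat(
    [fore, mid],
    dim=pd.Index(["Fore", "Mid"], name="Antenna"),
)
# combined["SlavePhase"] gains an Antenna dimension and is all-NaN on the "Mid" side
```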

DavidMcCann-NOC commented 1 year ago

I've started to build some functions that find the fore/aft/mid triplets of .nc files in a given folder and then load them into datasets. As part of that, I've started to look at how to do the L1 processing on a combined dataset with an antenna dimension (i.e. after calling merge_beams), rather than working on each dataset separately and then merging before L2 processing begins.

So far merge doesn't seem to do the job, or at least it isn't working as I would like it to. I think merge is the correct function rather than concat - the latter is for combining datasets across a new dimension, whereas what we need is to append a dataset with newly calculated variables.

This could be combined with issue #182, as it might be that returning DataArrays from the L1 functions rather than Datasets would make their insertion into the combined L1 dataset more straightforward.
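To spell out the merge/concat distinction with two toy datasets: merge combines variables from different datasets into one, while concat stacks datasets along a (possibly new) dimension.

```python
import numpy as np
import xarray as xr

a = xr.Dataset({"u": ("x", np.arange(3.0))})
b = xr.Dataset({"v": ("x", np.arange(3.0) * 10)})

merged = xr.merge([a, b])                    # one dataset holding both u and v
stacked = xr.concat([a, a], dim="Antenna")   # same variable u, new Antenna dimension of length 2
```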

DavidMcCann-NOC commented 1 year ago

I have a way forward. It's a slightly different way of doing things, but if the datasets are in a list then processing can be done on each antenna direction one at a time, and the whole lot is then concatenated into a proper xr.Dataset along the Antenna dimension at the end.

ACHMartin commented 1 year ago

> I have a way forward. It's a slightly different way of doing things, but if the datasets are in a list then processing can be done on each antenna direction one at a time, and the whole lot is then concatenated into a proper xr.Dataset along the Antenna dimension at the end.

So to refer to the initial issue, the idea would be to do:

for ii, ant in enumerate([fore, mid, aft]):
    list[ii] = ss.oscar.level1.do_something(level1.sel(Antenna='Fore'))

then

level1['new_field'] = xr.merge(list)

Is that correct?

DavidMcCann-NOC commented 1 year ago

Almost - I'm just putting the last few bits together to commit it to my branch, so it won't be long. (I need to enact your comments on the last pull request and merge that first, otherwise these commits will get added to it; happy to do that, but I would rather keep them separate.)

It will work like:

oscar_path = "D:\data\SEASTAR\SEASTARex\Data\Metasensing\OSCAR\Brest_Sample_Oct_13_2022_precompute\"

file_time_triplets = ss.utils.tools.find_file_triplets(oscar_path)  # Finds the three antenna files associated with an acquisition
ds = ss.utils.readers.load_OSCAR_data(oscar_path, file_time_triplets[0][1])  # Reads a set of triplets into a dict
antenna_ident = ss.utils.tools.antenna_idents(ds)  # Identifies the antennas in each dataset based on processed Doppler
ds = fill_missing_variables(ds, antenna_ident)  # Sorts out the fact that the mid antenna doesn't have slave variables; fills with NaN

then:

for i in range(len(ds)):
    ds[i] = ss.oscar.level1.check_antenna_polarization(ds[i])
    ds[i] = ss.oscar.level1.compute_multilooking_Master_Slave(ds[i], window=7)
    ds[i]['Baseline'] = ss.oscar.level1.compute_antenna_baseline(0.2)
    ds[i] = ss.oscar.level1.compute_antenna_azimuth_direction(ds[i], antenna=antenna_ident[i])
    ds[i] = ss.oscar.level1.compute_time_lag_Master_Slave(ds[i], options='from_SAR_time')
    ds[i] = ss.oscar.level1.compute_radial_surface_velocity(ds[i])

So then we have the three beams processed and ready to be merged into a unified L1 dataset to be passed to the L2 processor.

The '0' index in ds = ss.utils.readers.load_OSCAR_data(oscar_path, file_time_triplets[0][1]) refers to the first beam triplet, so if we have multiple acquisitions in the same folder this index can be looped over.
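The final combine step itself isn't shown above, but assuming ds stays indexed 0..2 and antenna_ident maps the same indices to the antenna names, it could look something like this sketch (the labelled-Index approach is an assumption, not taken from the branch):

```python
import pandas as pd
import xarray as xr

# Gather the processed per-beam datasets and concatenate them along a new,
# labelled Antenna dimension to form the unified L1 dataset for the L2 processor.
beams = [ds[i] for i in range(len(ds))]
labels = pd.Index([antenna_ident[i] for i in range(len(ds))], name="Antenna")
level1 = xr.concat(beams, dim=labels)
```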

DavidMcCann-NOC commented 1 year ago

Closed with pull request #192.