OSOceanAcoustics / echopype

Enabling interoperability and scalability in ocean sonar data analysis
https://echopype.readthedocs.io/
Apache License 2.0

Large memory expansion when data are converted and organized into groups/EchoData object #489

Closed · leewujung closed this issue 2 years ago

leewujung commented 2 years ago

This issue is split from #407 since the symptom is similar but the underlying cause is different.

Tagging @oftfrfbf, @lsetiawan, @emiliom so that we can continue the discussion here. 😃

This issue focuses on the problem that memory use sometimes expands dramatically when echopype converts a file of moderate size (whereas #407 focuses on the case where the data file itself is too large to fit into memory).

Description

Based on @oftfrfbf's and @lsetiawan's investigations (quoted below), there may very well be two different types of things happening (or more, depending on the exact form of the data):

Potential solution?

For the case with the OOI file, I wonder if we could circumvent the problem by saving the larger data variables (backscatter_r, alongship_angle, athwartship_angle) one by one via xr.Dataset.to_zarr with mode='a', so that the peak memory usage would not exceed what is required for one of them (vs having all 3 in memory at the same time).
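A minimal sketch of that idea (assuming `ds` is the assembled beam Dataset holding the three variables; not echopype code):

```python
import xarray as xr

# Sketch only: write each large variable to the same zarr store one at a
# time, so peak memory is bounded by a single variable rather than all three.
large_vars = ["backscatter_r", "alongship_angle", "athwartship_angle"]

# The first write creates the store (coords plus the first variable) ...
ds[[large_vars[0]]].to_zarr("beam_tmp.zarr", mode="w")
# ... then the remaining variables are appended one by one with mode="a".
for name in large_vars[1:]:
    ds[[name]].to_zarr("beam_tmp.zarr", mode="a")
```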

The case with the 95 MB file is more complicated if the reason is as described above. A couple of not particularly well-formed thoughts:

Thoughts???

leewujung commented 2 years ago

@emiliom : this and this in the xarray sparse support thread are interestingly related to our discussion on ragged array representation a couple weeks back. :)

leewujung commented 2 years ago

@oftfrfbf : I looked into the 95 MB file you linked and the problem is indeed of the same form as I suspected before:

The 3 frequency channels are configured with different pulse lengths and hence different sampling intervals.

Working from the parser_obj:

[Screenshot (2021-11-14): parser_obj backscatter array shapes for the 3 channels]

You can see that the backscatter data from the 3 channels have the same number of pings (9923) but different numbers of samples (1302, 2604, 10417). When converted to actual range in meters, all 3 channels are set up to record to 500 m.

When the EchoData object is assembled, these data are merged into a 3D data cube, with the shorter pings padded with NaN. The data size for the 1st and 2nd channels therefore expands by 8x and 4x, respectively. The same expansion happens for the angle data, since all 3 channels are split-beam.
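For concreteness, here is a minimal sketch (not echopype's actual assembly code) of how aligning channels with different sample counts NaN-pads the shorter ones; the shapes are taken from the screenshot above:

```python
import numpy as np
import xarray as xr

# Per-channel backscatter shares the ping dimension (9923) but differs
# along range_sample (1302, 2604, 10417 samples).
n_ping = 9923
per_channel = [
    xr.DataArray(
        np.zeros((n_ping, n_sample), dtype=np.float32),
        dims=["ping_time", "range_sample"],
        coords={"range_sample": np.arange(n_sample)},
    )
    for n_sample in (1302, 2604, 10417)
]

# Concatenating along a new channel dimension outer-joins range_sample,
# NaN-padding the two shorter channels out to 10417 samples.
cube = xr.concat(per_channel, dim="channel")
print(cube.shape)          # (3, 9923, 10417)
print(cube.nbytes / 1e9)   # ~1.24 GB, vs ~0.57 GB for the unpadded data
```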

leewujung commented 2 years ago

In the spirit of having everything in one place, below is the same info for the 725 MB OOI file as above, so that we are all on the same page about what the NaN-padding-induced problem is.

[Screenshot (2021-11-14): parser_obj backscatter and angle array shapes for the 725 MB OOI file]

Even though the backscatter data from all 3 channels have the same number of pings and samples along range, only the last channel is split-beam, so there are no angle data from the first 2 channels.

When the EchoData object is assembled, two all-NaN channel slices are created for each of the angle variables (alongship and athwartship) to match the size of the last channel, hence the memory expansion.

oftfrfbf commented 2 years ago

I am looking at this problem again and am exploring converting the numpy arrays to dask arrays packaged within xarray Datasets (specifically to minimize memory expansion when multiple channels are merged, as noted in the previous examples). My hope is that the chunking dask arrays allow for could keep the memory profile within reasonable limits, but I will report back with either a success or a failure. Also, I am not sure if my thinking goes against recent notes on eliminating xarray merges in favor of numpy?
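A rough sketch of the direction I mean (shapes assumed from the examples above; not echopype's parser code):

```python
import dask.array as da
import numpy as np
import xarray as xr

# Sketch: wrap a parsed numpy array in a chunked dask array before building
# the Dataset, so later channel merges stay lazy and are evaluated
# chunk-by-chunk (e.g. when written with to_zarr) instead of materializing
# the full NaN-padded cube in memory.
backscatter = np.zeros((9923, 10417), dtype=np.float32)  # one channel
lazy = da.from_array(backscatter, chunks=(1024, -1))     # chunk over pings

ds = xr.Dataset(
    {"backscatter_r": (("ping_time", "range_sample"), lazy)}
)
print(ds["backscatter_r"].chunks)  # dask-backed variable; nothing computed yet
```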

imranmaj commented 2 years ago

> Also, I am not sure if my thinking goes against recent notes on eliminating xarray merges in favor of numpy?

Since dask arrays implement a good part of the numpy API, the recent work on converting xarray merges to numpy operations should be easily changed to work with dask instead.
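For example (a toy illustration; `pad_and_stack` is a made-up helper, not echopype code), the same padding logic written against the numpy API also runs with dask:

```python
import dask.array as da
import numpy as np

def pad_and_stack(xp, a, b):
    # Pad the narrower array with NaN up to the wider width, then stack;
    # written once against the shared numpy/dask API (`xp` is the module).
    pad = xp.full((a.shape[0], b.shape[1] - a.shape[1]), np.nan)
    return xp.stack([xp.concatenate([a, pad], axis=1), b])

a, b = np.ones((4, 2)), np.ones((4, 5))
print(pad_and_stack(np, a, b).shape)                                # eager
print(pad_and_stack(da, da.from_array(a), da.from_array(b)).shape)  # lazy
```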

b-reyes commented 2 years ago

In PR #774 we extended the open_raw function with a kwarg that lets users write variables with a large memory footprint directly to a temporary zarr store for EK60/EK80 echosounders. In that PR we also successfully opened both of the files discussed in this issue and wrote them to a zarr file. Currently this functionality is in beta, since it still requires unit testing, and we would like to add an option that automatically determines whether to write to a temporary zarr store (see #782). However, PR #774 addresses the core problem highlighted in this issue, so I believe we can close it.
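From the user side this might look like the following (the kwarg name `use_swap` and the file path are my assumptions; see PR #774 for the actual signature):

```python
import echopype as ep

# Usage sketch; `use_swap` is an assumed kwarg name -- check PR #774 and the
# docs for the actual parameter added to open_raw.
ed = ep.open_raw(
    "ooi_example.raw",   # hypothetical path, e.g. the 725 MB OOI file
    sonar_model="EK60",
    use_swap=True,       # offload large variables to a temporary zarr store
)
ed.to_zarr("ooi_example_converted.zarr")
```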

@oftfrfbf thank you very much for your input on this issue and for providing us with an excellent test file!

leewujung commented 2 years ago

@b-reyes : I agree that we can close this issue now! I believe @imranmaj's changes addressed the AD2CP side of things, and irregular data like this is much less likely to happen for AZFP.