ASFOpenSARlab / opensarlab_MintPy_Recipe_Book

A Jupyter-Book containing data recipes for creating HyP3 Sentinel-1 INSAR_GAMMA and INSAR_ISCE_BURST Small Baseline Subset (SBAS) Time Series Analyses with MintPy
BSD 3-Clause "New" or "Revised" License

Large ifgramStack crashes the 'Output Results to GeoTiff' notebook #11

Closed: iykman360 closed this issue 1 month ago

iykman360 commented 2 months ago

Notebook name: Output Results to GeoTiff

Describe the bug: I created a large volume of interferograms, and my ifgramStack is around 33 GB. When I try to run the line ( _, unw_info = mintpy.utils.readfile.read(ifgramstack)) in section 5, 'Save the Unwrapped Displacement GeoTiffs', it crashes the system.
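
For context, a quick way to see how much memory a full read would need, without actually loading the data, is to inspect the HDF5 datasets directly. A minimal sketch, assuming a standard MintPy ifgramStack.h5 layout; the path below is hypothetical:

```python
import h5py
import numpy as np

# Hypothetical path -- substitute the actual location of your stack
ifgramstack = "MintPy/inputs/ifgramStack.h5"

# h5py reads metadata lazily, so this does not load the ~33 GB of data
with h5py.File(ifgramstack, "r") as f:
    for name, dset in f.items():
        if isinstance(dset, h5py.Dataset):
            size_gb = np.prod(dset.shape) * dset.dtype.itemsize / 1024**3
            print(f"{name}: shape={dset.shape}, dtype={dset.dtype}, "
                  f"~{size_gb:.2f} GB if fully loaded")
```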

To Reproduce:

  1. Open the 'Output Results to GeoTiff' notebook
  2. Run section 5, 'Save the Unwrapped Displacement GeoTiffs', with a large (~33 GB) ifgramStack
  3. Observe the crash

Expected behavior: I expect this section to run seamlessly.

Are you running locally or in OpenSARLab? I am running in OpenSARLab.

Alex-Lewandowski commented 2 months ago

@iykman360 Would you provide some more detail on how it crashes? Does the notebook kernel die? Does your OpenSARLab server stop running? Do you receive an error when trying to run that line? Do you have enough storage space remaining to hold the output GeoTiffs?
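
One quick way to answer the storage question from inside the notebook is the standard library. A minimal sketch; the path is a hypothetical JupyterHub home directory, so point it at wherever your GeoTiffs will be written:

```python
import shutil

# Free space on the volume holding the output directory
# (hypothetical path -- use your actual output location)
total, used, free = shutil.disk_usage("/home/jovyan")
print(f"{free / 1024**3:.1f} GB free of {total / 1024**3:.1f} GB total "
      f"({used / total:.1%} used)")
```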

iykman360 commented 2 months ago

Hi Alex, thanks for your response. My disk space used is 20.29%, so I think I have enough space remaining to hold the output. I have attached the error message. I think my server stops running because the ifgramStack is too large. Is there a way to read the ifgramStack in batches? Maybe that could work.
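
On the batch-reading question, here is a minimal sketch of one approach: slice the HDF5 datasets directly with h5py so only one spatial tile is resident in memory at a time. The dataset name unwrapPhase follows the standard MintPy ifgramStack layout; the path and batch size are assumptions to adjust:

```python
import h5py

ifgramstack = "MintPy/inputs/ifgramStack.h5"  # hypothetical path
rows_per_batch = 512                          # tune to available RAM

with h5py.File(ifgramstack, "r") as f:
    unw = f["unwrapPhase"]                    # shape: (num_ifgrams, length, width)
    num_ifgrams, length, width = unw.shape
    for r0 in range(0, length, rows_per_batch):
        r1 = min(r0 + rows_per_batch, length)
        tile = unw[:, r0:r1, :]               # only this slab is read into memory
        # ... process/write this tile to the GeoTiffs, then let it go out of scope
```

MintPy's readfile.read also accepts a box argument for spatial subsetting, which may achieve the same effect without dropping down to raw h5py.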

Alex-Lewandowski commented 1 month ago

Hi @iykman360, Typically, if the stack is too large and you run out of memory, the notebook kernel crashes but the server continues to run. The server itself can sometimes crash unexpectedly when using a shared server on a JupyterHub (like OpenSARLab). Both the SAR 1 and SAR 2 server profiles are shared. The m6a.xlarge - Single User and m6a.large - Single User servers are not. I recommend trying one of those.
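
To check whether a given server profile has enough RAM before attempting the full read, a minimal sketch with psutil (commonly available in scientific Python environments):

```python
import psutil

# A full readfile.read of a ~33 GB stack needs at least that much
# available RAM, plus working overhead
mem = psutil.virtual_memory()
print(f"{mem.available / 1024**3:.1f} GB available "
      f"of {mem.total / 1024**3:.1f} GB total")
```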

iykman360 commented 1 month ago

Thank you so much, Alex. This worked!!!