maximlamare / S3_extract

Extract the outputs from the S3 OLCI processor for a given number of S3 files at given coordinates.
MIT License

Hard drive partition running out of space #9

Open widaro opened 4 years ago

widaro commented 4 years ago

Problem discovered using an ESA virtual machine (Ubuntu 18.04.2 LTS, 64-bit).

When using 's3_extract_snow_product.py' to process multiple '.SEN3' files, the products created by snappy are not garbage-collected and deleted, even when the dispose() method is called on the objects. Even if the program is restarted frequently (as in the memory cache issue I also reported), the partition holding the temporary files fills up as '.SEN3' files are processed. Clean-up only happens once the partition is full, at which point the program terminates. This is apparently because GPF writes the products to the hard drive and they are not deleted, due to a bug in snappy (as described here: https://forum.step.esa.int/t/make-snappy-wait-for-java-to-finish/1366/4).

The problem can be solved by calling dispose() on all of the objects created with snappy and then emptying the cache in which they are stored.
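The cache-emptying snippet appears to be missing from this report. Below is a minimal sketch of the pattern described in the linked forum threads, assuming a working SNAP install with the snappy/jpy bridge; the helper name `release` is illustrative, and `javax.media.jai.JAI` is the Java class whose default tile cache GPF writes tiles into:

```python
from snappy import jpy

def release(product):
    """Dispose of a snappy Product and flush the JAI tile cache
    so GPF's temporary tiles can actually be freed."""
    product.dispose()  # release native resources held by the product

    # Flush the global JAI tile cache that GPF stores tiles in
    JAI = jpy.get_type('javax.media.jai.JAI')
    JAI.getDefaultInstance().getTileCache().flush()

    # Ask the JVM to collect the now-unreferenced Java objects
    System = jpy.get_type('java.lang.System')
    System.gc()
```

Calling something like release(product) after each '.SEN3' file is processed should let the temporary files be cleaned up between iterations instead of accumulating until the partition is full.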

The issue is also discussed in this thread: https://forum.step.esa.int/t/handling-sentinel-data-in-python-working-with-snappy/2222/41