..when the current job needs more space than you have.

This fixes #40.

It works by having each datasource generate the set of its cached DEMs; we then subtract from this the set required by the current job. What remains is the set of DEMs we could remove if we need the space. We save this list in shared memory and, in a synchronized manner, delete those files when the unpack operation fails.

We still need to catch only the specific exception thrown by unpacking to a full disk, though. I'm still testing this, so it's not quite ready yet; I just wanted to share the diff for easier debugging.
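A minimal sketch of the idea, in Python for illustration only (the function and parameter names here are hypothetical, not the actual ones in this diff): compute the evictable set as cached minus required, then on a disk-full failure delete evictable files one at a time and retry the unpack.

```python
import errno


def evictable_dems(cached_dems, required_dems):
    # DEMs we could safely delete: everything cached minus what
    # the current job still needs.
    return set(cached_dems) - set(required_dems)


def unpack_with_eviction(unpack, evictable, delete_file):
    # Try the unpack; on a disk-full error (ENOSPC), free space by
    # deleting one evictable DEM and retry. `unpack` and
    # `delete_file` stand in for the real operations.
    while True:
        try:
            return unpack()
        except OSError as e:
            if e.errno != errno.ENOSPC or not evictable:
                raise  # not a disk-full error, or nothing left to evict
            delete_file(evictable.pop())
```

In the real implementation the evictable list lives in shared memory and deletion is synchronized across workers; the retry loop above glosses over that coordination.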