malonzo47 closed this issue 4 years ago
Related to big data. Not sure how to fix at the moment.
So how do you recommend we proceed? Seems like there should be a line of code that is making this go bad. The other metric images from Step 4 seem to work ok... Isn't this just standard debugging? Or is there some fundamental issue with BigTiff that cannot be resolved?
Actually, whenever I use a big dataset I get a "can't allocate memory" error, so it's difficult to investigate the cause. How are you avoiding that error?
I've never gotten that error. Do you only get it for Custom Date or is it before you even get to that step?
I get that error on Step3 creating metrics images.
Interesting. I've never seen that. Is there no way to dig into the code to try to understand what's happening? This is a critical issue since if it happens to others it means the program simply doesn't work for normal-sized datasets.
I can't replicate the error with the demo dataset, and with the big dataset I hit the memory error. I don't have a way to handle the memory error itself. Please provide a dataset and workflow that replicates the original error.
I'm not sure what to tell you... my workflow is in Step 4 but you can't get to Step 4. Seems like the error with memory allocation needs to be fixed first. If you can't solve it but can more fully document it (including what kind of expertise one might need to fix it), I can try to pass along to someone else.
The memory issue is related to the PC's RAM size, so it's a machine-specification problem: the program is trying to allocate more space than my RAM allows.
I am processing this dataset with total RAM usage of 8 GB most of which is not even for this process. This doesn't seem like a machine issue to me, necessarily.
Please check the latest version on the master branch. I just checked again after increasing the pagefile size of my Windows and got the images below.
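As an aside: enlarging the Windows pagefile only defers the problem. An alternative, if the failing allocation is the full image stack, would be to memory-map the stack and process it in row blocks so only one block is resident in RAM at a time. This is a minimal sketch, not pynita's actual code; the shape, file path, and per-block statistic are all invented for illustration:

```python
import os
import tempfile

import numpy as np

# Hypothetical stack dimensions: (bands, rows, cols).
shape = (10, 400, 300)
path = os.path.join(tempfile.gettempdir(), "stack_demo.dat")

# Memory-map the stack on disk; only the slices we touch are paged in.
stack = np.memmap(path, dtype=np.float32, mode="w+", shape=shape)
stack[:] = 1.0  # stand-in data for the demo

# Process the image in row blocks instead of materializing it whole.
block = 100
band_sums = np.zeros(shape[0], dtype=np.float64)
for r0 in range(0, shape[1], block):
    chunk = np.asarray(stack[:, r0:r0 + block, :])  # small in-RAM copy
    band_sums += chunk.sum(axis=(1, 2))

band_means = band_sums / (shape[1] * shape[2])

del stack  # release the mapping before removing the scratch file
os.remove(path)
```

Whether this fits depends on how Step 3 builds the metric images; if the per-pixel metric only needs a pixel's own time series, a blocked loop like this keeps peak memory roughly constant regardless of image size.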
Wait, is this on a subset that you drew in Step 3? This does not appear to be the full image. We need to make sure this works on the full image.
Also, I'm not sure how to test this. I only run code via the latest binary release (e.g., v.1.0.3) or through Docker. I am not able to successfully run from source without Docker. That install workflow does not work for me. Please advise.
Checking again with the full image, I see the exception occurs at line 135 of nita_funcs.py:
`divide by zero encountered in true_divide`
The problem is that I am not clear how to handle this exception.
What is the function supposed to return if mov_std is zero?
Are you sure this is actually leading to a fatal error? I see this warning all the time in the CMD window and it never halts processing. That said, I will look into this function to see how to handle the exception.
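If it really is only a RuntimeWarning, one conventional NumPy pattern is to define the zero-denominator case explicitly via the `where=` argument of `np.divide`. This is a hedged sketch, not pynita's documented behavior; the `mov_std` name follows the traceback above, and filling with 0 is an assumption that would need to be checked against what the metric is supposed to mean:

```python
import numpy as np

def safe_ratio(numer, denom, fill=0.0):
    """Elementwise numer / denom, returning `fill` wherever denom == 0,
    without emitting a divide-by-zero warning."""
    numer = np.asarray(numer, dtype=float)
    denom = np.asarray(denom, dtype=float)
    out = np.full_like(numer, fill)          # pre-fill the masked positions
    np.divide(numer, denom, out=out, where=denom != 0)
    return out

# e.g. a z-score-style metric that is defined as 0 where the moving std is 0:
vals = np.array([2.0, 4.0, 6.0])
mov_std = np.array([1.0, 0.0, 2.0])
z = safe_ratio(vals, mov_std)  # -> [2., 0., 3.]
```

If the warning turns out not to be fatal, the crash may instead come from downstream code consuming the `inf`/`nan` values the division produces, which this guard would also prevent.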
Closing this because: 1) I am not currently able to replicate it, so, at the very least, this may not be the real issue. 2) A more important underlying issue relates to installing a GDAL build that supports BigTIFF.
This might be an issue that we're only seeing with larger datasets. Here is the link to the input data to use: https://drive.google.com/drive/folders/1lBIRCwAvGjLA8GTFF9M_ImwikMK-KzwU?usp=sharing

- image stack = zinda_nbr_stack2.tif
- stack dates = zinda_stack_dates2.csv
- user_config = user_configs_JJAS.ini
Workflow: get to Step 4, then display the "Value Change Custom" from 2010000 to 9999 (the last available image date). The crash happens at that point.
I'm using Windows release 1.0.3.