Closed aa2782 closed 6 months ago
It's just a warning. I ignore it.
I initially ran this automated for loop on 10 images, like this:
But the result was an empty data frame along with a TensorFlow warning. I think the 'deep_flash' code is downloading the dataset each time it loops, and my GPU (16 GB) and system RAM (13 GB) aren't enough to store all the parameters.
I was wondering if there's a way to keep the 'deep_flash' utility from downloading the data on each iteration.
> But the result was an empty data frame with tensorflow warning.
These two things aren't related.
> I was wondering if there's a way for the 'deep_flash' utility not to download the data each time it loops through.
By design, ANTsXNet stores all external data in ~/.keras/ANTsXNet/ (see the antsxnet_cache_directory option). If the external data already exists in the cache directory, it won't be downloaded again. If your program is repeatedly downloading the data, then there is a problem specific to your individual system.
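For readers unfamiliar with how such a download cache works, the check can be sketched roughly like this. This is a minimal illustration with a stubbed download step; the file name and the write are hypothetical, not ANTsXNet's actual implementation:

```python
from pathlib import Path

def get_pretrained_data(filename, cache_directory="~/.keras/ANTsXNet"):
    """Return the cached file path, downloading only on a cache miss.
    (Illustrative sketch, not ANTsXNet's actual implementation.)"""
    cache_dir = Path(cache_directory).expanduser()
    cache_dir.mkdir(parents=True, exist_ok=True)
    target = cache_dir / filename
    if target.exists():
        # Cache hit: reuse the previously downloaded file.
        return target
    # Cache miss: download (stubbed here) and store in the cache.
    target.write_bytes(b"...model weights...")
    return target
```

The key point is that the second and every later call with the same filename returns the existing file without re-downloading, provided the cache directory survives between runs.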
I see. By individual system, do you mean my hardware or the code that I wrote? I am running this on Google Colab, if that helps.
My knowledge of Google Colab is limited, but perhaps something about that platform necessitates repeated downloads. You have the option of explicitly specifying antsxnet_cache_directory if you have a more permanent disk storage option.
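On Colab specifically, anything under ~/.keras is lost when the VM recycles, so a mounted Google Drive folder is one persistent option. A hedged sketch follows; the Drive path and the small helper are illustrative, while antsxnet_cache_directory is the real parameter discussed above:

```python
import os

def make_persistent_cache(path):
    """Create (if needed) and return a cache directory for ANTsXNet data."""
    os.makedirs(path, exist_ok=True)
    return path

# On Google Colab, mount Drive first so the cache survives VM resets
# (paths are illustrative; adjust to your own Drive layout):
# from google.colab import drive
# drive.mount('/content/drive')
# cache_dir = make_persistent_cache('/content/drive/MyDrive/antsxnet_cache')
#
# seg = antspynet.deep_flash(t1, do_preprocessing=True,
#                            antsxnet_cache_directory=cache_dir)
```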
You should also be aware that you don't need to have a GPU to run these ANTsPyNet programs. In fact, although I do all the training using a GPU, whenever I'm running a study, similar to what you're doing, I use my desktop or run it on the university cluster without the GPU.
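If you want to force CPU execution (for example, when GPU memory is the bottleneck), one common approach is to hide the GPU from TensorFlow before it is imported. This is a general TensorFlow/CUDA convention, not something specific to ANTsPyNet:

```python
import os

# Hide any GPU from TensorFlow *before* it is imported, so deep_flash
# runs on the CPU (slower per image, but no GPU memory limits).
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# import antspynet  # TensorFlow now sees no GPU devices
```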
Thank you for your insight. I was able to fix the problem once I realized that I was looping through raw files with different dimensions. When I ran the loop with files of the same dimensions, the code seemed to work.
> I was able to fix the problem once I realized that I was looping through raw files with different dimensions. When I ran the loop with files of the same dimensions, the code seemed to work.
This shouldn't matter. Images of different sizes are handled internally within deep_flash. However, if your images are not in canonical orientation, that would be one explanation for it not working.
Agreed, with do_preprocessing = True, all the images should be the same size after pre-processing.
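As a rough illustration of why the output sizes end up uniform, here is a toy nearest-neighbour resample of a volume to a fixed shape. The real preprocessing also reorients and registers to a template, so this is only a conceptual sketch, not the actual pipeline:

```python
import numpy as np

def resample_to_shape(volume, target_shape):
    """Nearest-neighbour resample of a 3-D array to a fixed shape.
    A toy stand-in for the template resampling that do_preprocessing
    performs; the real pipeline also reorients and registers."""
    idx = [np.round(np.linspace(0, s - 1, t)).astype(int)
           for s, t in zip(volume.shape, target_shape)]
    return volume[np.ix_(*idx)]
```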
The error might result from registration failing completely. If the type of the input changes (e.g., to None), that could trigger function retracing, which would also explain why no output is produced.
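For context, TensorFlow emits the retracing warning because a tf.function is compiled ("traced") once per distinct input signature, and a new type or shape forces a new trace. A pure-Python sketch of that mechanism (not TensorFlow's actual code):

```python
def traced(fn):
    """Decorator that caches 'traces' per input signature, mimicking
    how tf.function retraces on a new type or shape."""
    traces = {}
    def wrapper(x):
        sig = (type(x).__name__, getattr(x, "shape", None))
        if sig not in traces:
            traces[sig] = True  # new signature -> retrace (warning in TF)
            wrapper.trace_count += 1
        return fn(x)
    wrapper.trace_count = 0
    return wrapper
```

Calling such a function repeatedly with arrays of one shape traces it once; feeding it a different shape, or None after a failed registration, triggers another trace each time the signature changes.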
The images of different dimensions might have other problems, such as not being in the correct canonical orientation. You can try opening them in ITK-SNAP; if the anatomical labels (L, R, A, P, I, S) do not agree with the anatomy, that could be the problem.
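If you'd rather check orientation programmatically than open each image in ITK-SNAP, the axis codes can be read off the NIfTI affine. Below is a simplified version of the idea behind nibabel's aff2axcodes, written with plain NumPy so it carries no extra dependency; in practice you could get the same information from nibabel directly:

```python
import numpy as np

def axis_codes(affine):
    """Return anatomical axis codes (e.g. ('R', 'A', 'S')) for a 4x4
    NIfTI-style affine; a simplified take on nibabel's aff2axcodes."""
    labels = (("L", "R"), ("P", "A"), ("I", "S"))
    codes = []
    for col in affine[:3, :3].T:             # one column per voxel axis
        axis = int(np.argmax(np.abs(col)))   # dominant world axis
        codes.append(labels[axis][int(col[axis] > 0)])
    return tuple(codes)
```

An identity affine yields ('R', 'A', 'S'); a flipped first axis yields ('L', 'A', 'S'). Images whose codes disagree with what you expect are candidates for reorientation before segmentation.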
Thank you so much guys. I really appreciate it!! I'll implement your suggestions and let you know how it goes.
Hi, I am trying to automate segmentation analysis for my 165 T1w MRI images using the 'deep_flash' utility within a for loop. I have set the 'do_preprocessing' argument to 'True'. However, I got this error:
I was wondering if there is a way to get around this TensorFlow warning.