ANTsX / ANTsPyNet

Pre-trained models and utilities for deep learning on medical images in Python
https://antspynet.readthedocs.io
Apache License 2.0

Tensorflow warning (deep_flash utilities) #87

Closed aa2782 closed 6 months ago

aa2782 commented 10 months ago

Hi, I am trying to automate segmentation analysis for my 165 T1w MRI images using the `deep_flash` utility inside a for loop. I have set the `do_preprocessing` argument to `True`. However, I got this error: error1. I was wondering if there is a way to get around this TensorFlow warning.

ntustison commented 10 months ago

It's just a warning. I ignore it.

aa2782 commented 10 months ago

I initially ran this automated for loop on 10 images, like this: err

But the result was an empty data frame with a TensorFlow warning. I think the `deep_flash` code is downloading the dataset each time it loops, and my GPU (16 GB) and system RAM (13 GB) aren't enough to store all the parameters.
I was wondering if there's a way for the `deep_flash` utility not to download the data on every iteration.

ntustison commented 10 months ago

But the result was an empty data frame with tensorflow warning.

These two things aren't related.

I was wondering if there's a way for the 'deep_flash' utility not to download the data each time it loops through.

By design, ANTsXNet stores all external data in `~/.keras/ANTsXNet/` (see the `antsxnet_cache_directory` option). If the external data already exists in the cache directory, it won't be downloaded again. If your program is repeatedly downloading the data, then there is a problem with your individual system.
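The cache-on-first-use behavior described above can be sketched with the standard library alone. Here `get_weights` is a hypothetical stand-in for ANTsXNet's internal data fetcher (the real logic lives inside ANTsPyNet); the point is only that a cache miss downloads once and every later call re-uses the cached file:

```python
# Minimal sketch of "download only on a cache miss" (stdlib only).
import os
import tempfile

def get_weights(name, cache_directory, downloads):
    """Return the cached path for `name`, fetching only if it is missing."""
    os.makedirs(cache_directory, exist_ok=True)
    path = os.path.join(cache_directory, name)
    if not os.path.exists(path):      # cache miss: fetch exactly once
        downloads.append(name)        # stand-in for the actual download
        with open(path, "wb") as f:
            f.write(b"weights")
    return path

cache = tempfile.mkdtemp()
log = []
for _ in range(3):                    # looping over subjects re-uses the cache
    get_weights("deepFlash.h5", cache, log)
print(len(log))  # → 1: the file is downloaded exactly once
```

If this pattern does not hold on your system (i.e., you see repeated downloads), the cache directory is probably being wiped between runs, which is exactly what happens with ephemeral storage such as a fresh Colab VM.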

aa2782 commented 10 months ago

I see. By individual system, do you mean my hardware or the code that I wrote? I am running this on Google Colab, if that helps.

ntustison commented 10 months ago

My knowledge of Google Colab is limited, but perhaps something about that platform is necessitating repeated downloads. You have the option of explicitly specifying the `antsxnet_cache_directory` if you have a more permanent disk storage option.

You should also be aware that you don't need a GPU to run these ANTsPyNet programs. In fact, although I do all the training on a GPU, whenever I'm running a study similar to what you're doing, I use my desktop or run it on the university cluster without a GPU.
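One common way to force CPU-only execution, as suggested above, is to hide the GPU from TensorFlow before it is first imported. This is a general TensorFlow technique, not something specific to ANTsPyNet:

```python
# Hedged sketch: hide all CUDA devices so TensorFlow (and hence ANTsPyNet
# inference) falls back to CPU. Slower per subject, but avoids GPU memory
# limits entirely. The environment variable must be set BEFORE TensorFlow
# is imported for the first time in the process.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# import tensorflow as tf
# tf.config.list_physical_devices("GPU")  # would now report no GPUs
```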

aa2782 commented 10 months ago

Thank you for your insight. I was able to fix the problem once I realized that I was looping through raw files with different dimensions. When I ran the loop with files of the same dimensions, the code seemed to work.

ntustison commented 10 months ago

I was able to fix the problem once I realized that I was looping through raw files with different dimensions. When I ran the loop with files of the same dimensions, the code seemed to work.

This shouldn't matter. Images of different sizes are handled internally within `deep_flash`. However, if your images are not in canonical orientation, that would be one explanation for it not working.

cookpa commented 10 months ago

Agreed, with `do_preprocessing=True`, all the images should be the same size after pre-processing.

The error might result from registration failing completely. If the type of the input changes (e.g., to `None`), that could trigger TensorFlow function retracing. It would also explain why no output gets produced.
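For intuition on why the retracing warning appears at all: a `tf.function`-compiled graph is cached per input signature (roughly, shape and dtype), so every new signature forces a fresh trace. This stdlib-only stand-in (no TensorFlow required) mimics that caching; the shapes are just illustrative:

```python
# Pure-Python stand-in for tf.function's per-signature trace cache.
traces = {}

def traced_call(shape, dtype="float32"):
    """Return the 'graph' for this signature, tracing only on a new one."""
    key = (shape, dtype)
    if key not in traces:          # unseen signature -> retrace
        traces[key] = len(traces)  # stand-in for compiling a new graph
    return traces[key]

for shape in [(256, 256, 176), (256, 256, 176), (182, 218, 182)]:
    traced_call(shape)
print(len(traces))  # → 2: the repeated shape re-uses its cached trace
```

In real TensorFlow, many distinct input signatures in a loop produce the "retracing" warning and slow things down, but the warning alone does not explain an empty result.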

The images of different dimensions might have other problems, such as not being in the correct canonical orientation. Try opening them in ITK-SNAP; if the anatomical labels (L, R, A, P, I, S) do not agree with the anatomy, that could be the problem.
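In practice you would check orientation with ITK-SNAP as suggested, or programmatically with nibabel's `nib.aff2axcodes(img.affine)`. As a self-contained illustration of what those axis codes mean, here is a tiny pure-Python version that labels each column of a direction matrix (it assumes a diagonal-dominant affine, which is the common case):

```python
# Toy version of nibabel's aff2axcodes for the 3x3 direction part of an
# affine: each column gets the anatomical label of its dominant axis.
def axis_codes(direction):
    """Return one of (L/R, P/A, I/S) per column of a 3x3 direction matrix."""
    labels = (("L", "R"), ("P", "A"), ("I", "S"))
    codes = []
    for col in range(3):
        column = [direction[row][col] for row in range(3)]
        dominant = max(range(3), key=lambda r: abs(column[r]))
        neg, pos = labels[dominant]
        codes.append(pos if column[dominant] > 0 else neg)
    return tuple(codes)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(axis_codes(identity))  # → ('R', 'A', 'S'): canonical RAS orientation
```

If the codes you get back (or the labels ITK-SNAP shows) disagree with the actual anatomy, the header orientation is wrong, and template-space tools like `deep_flash` can fail silently.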

aa2782 commented 10 months ago

Thank you so much guys. I really appreciate it!! I'll implement your suggestions and let you know how it goes.