ymli39 / DeepSEED-3D-ConvNets-for-Pulmonary-Nodule-Detection

DeepSEED: 3D Squeeze-and-Excitation Encoder-Decoder ConvNets for Pulmonary Nodule Detection

Cuda Out of Memory #19

Closed by ghost 3 years ago

ghost commented 4 years ago

Hello, I would like to first thank you for this nice project. I have some questions about your model:

  1. Can we test your model (detection only) on a CPU?
  2. I have a 24GB GPU, but I still get a CUDA out-of-memory error when I try to test your model. Is there any way to make the testing part use 128×128×128 patches, as in training, instead of 208×208×208?
  3. Is there any alternative way to solve this problem? Thanks for your help.
ymli39 commented 4 years ago

Hi,

  1. I did not test my model on a CPU, but I think you could do it by setting the CUDA flag to false.
  2. A 24 GB GPU is large enough to run the model; mine ran on a 12 GB GPU with a batch size of 8, so it should not be an issue. If you want to change the testing patch size, I suggest setting the testing batch size to a smaller value such as 2. Overlap between patches during testing is necessary to get better results (a sketch of overlapping patch extraction is shown after this list).
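For illustration, here is a minimal sketch of splitting a large CT volume into overlapping cubic patches for inference. This is not the repository's own test loader; the function names, the 32-voxel overlap, and the use of NumPy are assumptions for the example only.

```python
import numpy as np

def patch_starts(dim_size, patch_size, stride):
    """Start indices along one axis so overlapping patches cover the whole axis."""
    starts = list(range(0, max(dim_size - patch_size, 0) + 1, stride))
    # Make sure the final patch reaches the end of the volume.
    if starts[-1] + patch_size < dim_size:
        starts.append(dim_size - patch_size)
    return starts

def extract_overlapping_patches(volume, patch_size=128, overlap=32):
    """Split a 3D volume (D, H, W) into overlapping cubic patches.

    Returns a list of (patch, (z, y, x)) pairs; the start coordinates can be
    used later to stitch per-patch predictions back into the full volume.
    Assumes each axis of the volume is at least `patch_size` long.
    """
    stride = patch_size - overlap
    d, h, w = volume.shape
    patches = []
    for z in patch_starts(d, patch_size, stride):
        for y in patch_starts(h, patch_size, stride):
            for x in patch_starts(w, patch_size, stride):
                patch = volume[z:z + patch_size, y:y + patch_size, x:x + patch_size]
                patches.append((patch, (z, y, x)))
    return patches

# Example: a 208^3 crop split into 128^3 patches with 32-voxel overlap.
vol = np.zeros((208, 208, 208), dtype=np.float32)
print(len(extract_overlapping_patches(vol)))  # 8 patches (2 starts per axis)
```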
ymli39 commented 3 years ago

You could try a batch size of 2 if it keeps giving you an out-of-memory error.
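For reference, a reduced-batch test run using the same flags quoted later in this thread might look like `CUDA_VISIBLE_DEVICES=0 python train_detector_se.py -b 2 --resume 'best_model.ckpt' --test 1 --save-dir /output/`; the checkpoint name and output directory are placeholders from that command, and restricting the run to one GPU here is an assumption.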

ghost commented 3 years ago

It was not about the batch size. At least on my system, running your suggested test command `CUDA_VISIBLE_DEVICES=0,1 python train_detector_se.py -b 1 --resume 'best_model.ckpt' --test 1 --save-dir /output/` gives the memory error I mentioned. I changed this part of your code and it worked: `parser.add_argument('--gpu', default='0,1', type=str, metavar='N', help='use gpu')`.