AllenCellModeling / pytorch_fnet

Three dimensional cross-modal image inference

GPU memory allocation #153

Open siewerthug opened 4 years ago

siewerthug commented 4 years ago

Hi

Whenever I try to run the following command after installing: python predict.py

Then I get the following error: RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 10.73 GiB total capacity; 7.74 GiB already allocated; 77.06 MiB free; 1.89 GiB cached)

I have a GeForce RTX 2080 Ti 11 GB Founders Edition video card, so it should have plenty of memory free. All GPU memory should be available (nothing else is running, verified by nvidia-smi).
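
For completeness, the same picture can be checked from inside Python with standard torch.cuda calls (a generic sketch, nothing specific to pytorch_fnet):

import torch

# Generic check of what PyTorch itself sees on each GPU.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    allocated = torch.cuda.memory_allocated(i)  # bytes held by live tensors
    cached = torch.cuda.memory_cached(i)        # allocator cache (memory_reserved on newer PyTorch)
    print(f"GPU {i} ({props.name}): "
          f"{allocated / 2**30:.2f} GiB allocated, "
          f"{cached / 2**30:.2f} GiB cached, "
          f"{props.total_memory / 2**30:.2f} GiB total")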

It used to do the same thing with download_and_train.py, but then I changed the batch size to 16 and it started working. Of course, this is just a workaround and not a fix for the prediction code.

I would appreciate any help with this!

Siewert

gregjohnso commented 4 years ago

Hi Siewert, it turns out many of our users have reported this issue with 11 GB cards. Currently the only workaround is to reduce the batch size. I've updated the README to accurately reflect this requirement.
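
If reducing the batch size alone is not enough at prediction time, a generic PyTorch pattern that usually helps is to run the forward pass with autograd disabled and split the input into chunks. This is only a sketch (model and x are placeholders, not pytorch_fnet names, and it isn't wired into predict.py):

import torch

# Sketch: inference without autograd buffers, processed in small chunks.
def predict_in_chunks(model, x, chunk_size=1):
    model.eval()
    outputs = []
    with torch.no_grad():  # don't keep activations around for backprop
        for chunk in torch.split(x, chunk_size, dim=0):
            outputs.append(model(chunk.cuda()).cpu())  # move results off the GPU
    return torch.cat(outputs, dim=0)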

Greg

gregjohnso commented 4 years ago

@siewerthug were you able to find a workaround for this?

siewerthug commented 4 years ago

@gregjohnso I haven't looked into it further. In which file do I change the batch size for the prediction data?

calystay commented 4 years ago

I have also experienced this issue. To the best of my knowledge, I requested 60 GB of memory on the GPU. Instead of running predict.py, I was using the command-line interface to train (fnet train --json /where-the-model-is).

  File "/allen/aics/apps/hpc_shared/mod/anaconda3-5.3.0/envs/label_free_cy/bin/fnet", line 11, in <module>
    load_entry_point('fnet', 'console_scripts', 'fnet')()
  File "/home/calystay/pytorch_fnet/fnet/cli/main.py", line 41, in main
    func(args)
  File "/home/calystay/pytorch_fnet/fnet/cli/train_model.py", line 132, in main
    *bpds_train.get_batch(args.batch_size)
  File "/home/calystay/pytorch_fnet/fnet/fnet_model.py", line 242, in train_on_batch
    y_hat_batch = module(x_batch)
  File "/allen/aics/apps/hpc_shared/mod/anaconda3-5.3.0/envs/label_free_cy/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/calystay/pytorch_fnet/fnet/nn_modules/fnet_nn_3d_params.py", line 29, in forward
    x_rec = self.net_recurse(x)
  File "/allen/aics/apps/hpc_shared/mod/anaconda3-5.3.0/envs/label_free_cy/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/calystay/pytorch_fnet/fnet/nn_modules/fnet_nn_3d_params.py", line 80, in forward
    x_bn1 = self.bn1(x_convt)
  File "/allen/aics/apps/hpc_shared/mod/anaconda3-5.3.0/envs/label_free_cy/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/allen/aics/apps/hpc_shared/mod/anaconda3-5.3.0/envs/label_free_cy/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 81, in forward
    exponential_average_factor, self.eps)
  File "/allen/aics/apps/hpc_shared/mod/anaconda3-5.3.0/envs/label_free_cy/lib/python3.6/site-packages/torch/nn/functional.py", line 1656, in batch_norm
    training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: CUDA out of memory. Tried to allocate 448.00 MiB (GPU 0; 31.72 GiB total capacity; 5.71 GiB already allocated; 35.56 MiB free; 321.47 MiB cached)

dsethz commented 4 years ago

Description

I have a very similar issue: I observe a disproportionate increase in occupied memory as I change the batch size, and I have to go down to very small batch sizes (bs) to fit the model onto the GPU during training. Here is the occupied memory reported by nvidia-smi for each bs:

batch_size = 2 --> 13.5 GB
batch_size = 4 --> 22.6 GB
batch_size = 8 --> out of memory

I trained on single-image .tiff files that range between 2 and 4 MB in size. The training set consists of 130 (signal and target) images and the validation set consists of 60 images. Given that the provided demo uses a larger bs and larger samples, I assume this behaviour is not expected (see the back-of-the-envelope sketch after the config below). Of note, I trained with the fnet.nn_modules.fnet_nn_2d.Net class. Below is the configuration submitted via the .json file:

{
  "batch_size": 4,
  "bpds_kwargs": {"buffer_size": 16, "buffer_switch_interval": 2800, "patch_shape": [1, 1024, 1024]},
  "dataset_train": "fnet.data.TiffDataset",
  "dataset_train_kwargs": {"path_csv": "/some_path/test.csv"},
  "dataset_val": "fnet.data.TiffDataset",
  "dataset_val_kwargs": {"path_csv": "/some_path2/test.csv"},
  "fnet_model_class": "fnet.fnet_model.Model",
  "fnet_model_kwargs": {"betas": [0.9, 0.999], "criterion_class": "fnet.losses.WeightedMSE", "init_weights": false, "lr": 0.001, "nn_class": "fnet.nn_modules.fnet_nn_2d.Net", "scheduler": null},
  "interval_checkpoint": 10000,
  "interval_save": 1000,
  "iter_checkpoint": [],
  "n_iter": 50000,
  "path_save_dir": "/some_path3/model",
  "seed": null
}
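
For what it's worth, a back-of-the-envelope estimate (assuming, say, a 64-channel early conv layer; I have not profiled fnet's actual layer widths) suggests the numbers above are roughly linear in batch size, with the large per-sample cost driven by the 1024 x 1024 patch:

# Rough activation-memory estimate; the channel count is an assumption.
bytes_per_float = 4
channels = 64                 # assumed width of an early conv layer
h, w = 1024, 1024             # patch_shape from the config above
one_map_gib = h * w * channels * bytes_per_float / 2**30
print(f"one {channels}-channel feature map: {one_map_gib:.2f} GiB")  # 0.25 GiB

# A U-Net keeps many such maps alive for the backward pass, so several
# GiB per sample is plausible. Fitting the two measurements above:
per_sample = (22.6 - 13.5) / (4 - 2)     # ~4.6 GB per extra sample
fixed = 13.5 - 2 * per_sample            # ~4.4 GB fixed overhead
print(f"~{fixed:.1f} GB fixed + ~{per_sample:.2f} GB per sample")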

Environment

- python 3.7.6
- aicsimageio 3.0.7
- pytorch 1.4.0
- cuda 10.1
- Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-65-generic x86_64)
- NVIDIA Titan RTX

Thank you, Daniel