Closed atisman89 closed 1 year ago
Resolved after changing the default batch size to 1 in /home/VTUNet/vtunet/run/default_configuration.py:
elif task == 'Task003_tumor':
print("Task Tumor here we go !!!")
plans['plans_per_stage'][0]['batch_size'] = 1
The original value was 4; changing it to 2 didn't work either with 16 GB of GPU memory.
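The fix above can be sketched as a small helper. This is only an illustration of the override described in the snippet, not VTUNet's actual code: the function name `apply_memory_friendly_batch_size` and the dummy `plans` structure are assumptions, mirroring the `plans['plans_per_stage'][0]['batch_size']` access shown above.

```python
def apply_memory_friendly_batch_size(plans, task, batch_size=1):
    """Override the planned batch size for the tumor task.

    Mirrors the edit described above: the default of 4 (and even 2)
    OOMs on a 16 GB GPU, so force it down to 1 for this task.
    """
    if task == 'Task003_tumor':
        plans['plans_per_stage'][0]['batch_size'] = batch_size
    return plans


# Example with a minimal stand-in for the plans dictionary:
plans = {'plans_per_stage': {0: {'batch_size': 4}}}
apply_memory_friendly_batch_size(plans, 'Task003_tumor')
print(plans['plans_per_stage'][0]['batch_size'])  # now 1
```

Other tasks are left untouched, so only the memory-heavy tumor configuration is affected.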
Hi, I'm now getting a CUDA OOM error while training with the "small" configuration. I saw other similar issues but I'm not sure how this could be resolved in my situation. I'm using an AWS EC2 p3.2xlarge instance (61 GB RAM, 16 GB GPU memory). Is the "tiny" configuration still available? If so, how can I use it? Thanks.