Open shihongji1993 opened 1 year ago
This usually happens when your GPU memory is full. Decrease your batch size until it fits into your GPU memory; for example, a batch size of 32 usually allocates about 7 to 8 GB. To see how much GPU memory is allocated during processing, try this command:
watch nvidia-smi
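If you prefer to check from inside the training script, here is a minimal sketch (assuming PyTorch and a CUDA-capable GPU; not part of this project's code) that prints the memory the current process is using:

```python
import torch

def print_gpu_memory():
    # memory_allocated: tensors currently held; memory_reserved: total cached by PyTorch
    allocated = torch.cuda.memory_allocated() / 1024**2
    reserved = torch.cuda.memory_reserved() / 1024**2
    print(f"allocated: {allocated:.0f} MiB, reserved: {reserved:.0f} MiB")

print_gpu_memory()
```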
I have set batch_size=32, but the problem still occurs. When I set workers_per_gpu=0 or 1, the problem is solved, but training also becomes slower.
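For what it's worth, the two settings do different things: the batch size drives GPU memory use, while the worker count only controls CPU-side data loading speed. A minimal plain-PyTorch sketch of the same trade-off (a toy dataset, not this project's config):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy dataset just to illustrate the knobs.
dataset = TensorDataset(torch.randn(1024, 3, 224, 224),
                        torch.randint(0, 10, (1024,)))

loader = DataLoader(
    dataset,
    batch_size=16,    # reduce this if CUDA runs out of memory
    num_workers=2,    # like workers_per_gpu: more workers = faster loading; 0 disables multiprocessing
    pin_memory=True,  # speeds up host-to-GPU copies
)
```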
What is your GPU model? Would you check the number of CUDA cores?
My GPU is a 2080.