idiap / w2v2-air-traffic


GPU out of memory issue always at same point #4

Closed: damnfarooq closed this issue 1 year ago

damnfarooq commented 1 year ago

It gives me this error at step 500/10000 every time. Is it an issue with the values generated during training or with my system? I even tried batch size = 1.

(Farooq_thesis) phd-research@phd-research:~/research_space/w2v2-air-traffic$ bash ablations/uwb_atcc/train_w2v2_large-60v.sh

return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 584.00 MiB (GPU 0; 7.80 GiB total capacity; 6.13 GiB already allocated; 64.19 MiB free; 6.22 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

5%|███▌ | 500/10000 [02:24<45:46, 3.46it/s]

Done training facebook/wav2vec2-large-960h-lv60-self model for UWB-ATCC in: experiments/results/baselines
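The error message itself points at one low-cost thing to try: capping the CUDA caching allocator's split size to reduce fragmentation. A minimal sketch, assuming the setting is applied before anything touches the GPU (the 128 MiB value is only an example; exporting the variable in the shell before launching the training script has the same effect):

```python
import os

# Cap the allocator's maximum split size to limit fragmentation, as the
# OOM message suggests. The variable is read when the CUDA caching
# allocator is first used, so set it before any tensor reaches the GPU.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after the setting so nothing touches CUDA earlier
```

That said, in the log above the reserved memory (6.22 GiB) is only slightly larger than the allocated memory (6.13 GiB), so fragmentation is probably not the real problem; the model and activations simply do not fit in 8 GiB, which is what the reply below addresses.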

JuanPZuluaga commented 1 year ago

Hello, I think the model you're using is too large for your GPU. You need to try a smaller model, reduce the maximum duration of your utterances, or get a GPU with more memory.

Pablo
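For readers hitting the same wall, here is a minimal sketch of those mitigations using the standard Hugging Face datasets/transformers APIs. The dataset path, column names, 15-second cut-off, and the smaller checkpoint are illustrative assumptions, not values taken from this repository's scripts:

```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC

MAX_SECONDS = 15.0  # illustrative cut-off; pick it from your corpus' duration distribution

# Hypothetical local audio folder; the repo's own data-preparation scripts
# build their dataset differently, so adapt this loading step accordingly.
dataset = load_dataset("audiofolder", data_dir="data/uwb_atcc", split="train")

# Drop the longest utterances so the peak activation memory per batch stays bounded.
dataset = dataset.filter(
    lambda ex: len(ex["audio"]["array"]) / ex["audio"]["sampling_rate"] <= MAX_SECONDS
)

# Use a smaller checkpoint (~95M parameters) instead of the ~317M-parameter
# large model from the failing run.
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Trade extra compute for a large reduction in activation memory during backprop.
model.gradient_checkpointing_enable()
```

With the batch size already at 1, mixed-precision training (fp16=True in TrainingArguments) combined with gradient checkpointing is usually the next biggest saving. A failure that always appears at exactly the same step can also coincide with the first evaluation or checkpoint step, which adds its own memory on top of training, so it is worth checking whether step 500 is the script's eval/save interval.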