b04901014 / FT-w2v2-ser

Official implementation for the paper "Exploring Wav2vec 2.0 fine-tuning for improved speech emotion recognition"

run_downstream_custom_multiple_fold.py CUDA out of memory #7

Open · zxpan opened this issue 2 years ago

zxpan commented 2 years ago

I got the following error when running run_downstream_custom_multiple_fold.py:

RuntimeError: CUDA out of memory. Tried to allocate 730.00 MiB (GPU 0; 23.70 GiB total capacity; 21.65 GiB already allocated; 426.81 MiB free; 21.81 GiB reserved in total by PyTorch)

I have an NVIDIA GeForce RTX 3090 with 24 GB of memory.

Any insights on how to work around it?

JYeonKim commented 1 year ago

Me too. I think we have to use multiple GPUs.
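
For reference, a minimal sketch of splitting the batch across GPUs with torch.nn.DataParallel, assuming the downstream classifier is an ordinary nn.Module; the placeholder model and feature shapes below are not the repo's actual code, and the integration point in run_downstream_custom_multiple_fold.py may differ:

```python
import torch
import torch.nn as nn

# Placeholder downstream head; stands in for whatever model the script builds.
model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 4))

# Replicate the model on every visible GPU and scatter each batch across them,
# so GPU 0 no longer has to hold the whole 64-sample batch by itself.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.cuda()

batch = torch.randn(64, 768).cuda()  # example batch of pooled wav2vec 2.0 features
logits = model(batch)                # each GPU processes 64 / n_gpus samples
```

Note that DataParallel only spreads activation memory; if a single example already fills one GPU, it will not help.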

liuhaozhe6788 commented 1 year ago

@zxpan You can reduce the batch size from 64 to 32.
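
If you want to keep the effective batch size at 64 while fitting micro-batches of 32 in memory, gradient accumulation is another option. A minimal sketch with a placeholder model, loader, and loss (not the repo's actual training loop):

```python
import torch
import torch.nn as nn

# Hypothetical setup: a tiny classifier over pooled features, just to show the pattern.
model = nn.Linear(768, 4).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
accum_steps = 2  # 2 micro-batches of 32 = effective batch of 64

# Dummy loader of (features, labels) micro-batches of size 32.
loader = [(torch.randn(32, 768), torch.randint(0, 4, (32,))) for _ in range(8)]

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    # Scale the loss so accumulated gradients match a single 64-sample batch.
    loss = criterion(model(x.cuda()), y.cuda()) / accum_steps
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()       # update weights only every accum_steps micro-batches
        optimizer.zero_grad()
```

Peak activation memory is driven by the micro-batch size, so this roughly halves it while keeping the optimization behavior close to batch size 64.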