Audio-WestlakeU / FullSubNet

PyTorch implementation of "FullSubNet: A Full-Band and Sub-Band Fusion Model for Real-Time Single-Channel Speech Enhancement."
https://fullsubnet.readthedocs.io/en/latest/
MIT License

Batch size and GPU out of memory #37


danielemirabilii commented 2 years ago

Hi, I have been trying to train the FullSubNet model using the code in this repo. I can use a batch size of at most 12, which makes training very slow and inefficient (the loss decreases quite slowly). With any larger batch size I get a GPU out-of-memory error.
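
A common workaround for this kind of memory limit is gradient accumulation: run several small micro-batches and call the optimizer step once per effective batch, so the effective batch size grows without extra GPU memory. Below is a minimal PyTorch sketch; the linear model, random tensors, and hyperparameters are placeholders standing in for FullSubNet and its data pipeline, not the repo's actual training loop:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Illustrative stand-ins; FullSubNet's real model and loader differ.
model = nn.Linear(257, 257).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

accum_steps = 4   # 4 micro-batches of 12 ~ effective batch size of 48
micro_batch = 12

optimizer.zero_grad()
for step in range(100):
    noisy = torch.randn(micro_batch, 257, device=device)
    clean = torch.randn(micro_batch, 257, device=device)

    loss = loss_fn(model(noisy), clean) / accum_steps  # scale so gradients average
    loss.backward()                                    # gradients accumulate in .grad

    if (step + 1) % accum_steps == 0:
        optimizer.step()        # one update per effective batch
        optimizer.zero_grad()
```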

I have two Nvidia RTX 2080 Ti GPUs with 11 GB of memory each. I see from train.toml that the default batch size is 48. Any suggestions?
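
Another option that often helps on 11 GB cards is mixed-precision training, which roughly halves activation memory. A hedged sketch using PyTorch's `torch.cuda.amp`; the GRU model and tensor shapes are illustrative assumptions, and whether the repo's trainer exposes an AMP switch is not shown in this issue:

```python
import torch
from torch.cuda.amp import autocast, GradScaler

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder recurrent model; FullSubNet's architecture is more involved.
model = torch.nn.GRU(257, 512, batch_first=True).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = GradScaler(enabled=(device == "cuda"))

x = torch.randn(12, 100, 257, device=device)       # (batch, frames, freq bins)
target = torch.randn(12, 100, 512, device=device)  # matches GRU hidden size

optimizer.zero_grad()
with autocast(enabled=(device == "cuda")):  # fp16 forward pass cuts activation memory
    out, _ = model(x)
    loss = torch.nn.functional.mse_loss(out, target)

scaler.scale(loss).backward()  # scale loss to avoid fp16 gradient underflow
scaler.step(optimizer)
scaler.update()
```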