hbwu-ntu closed this issue 3 months ago
Thank you for recognizing our work!
During training we did use a multi-GPU strategy. Personally, I trained MP-SENet on two NVIDIA RTX 3090 Ti GPUs:

```
CUDA_VISIBLE_DEVICES=0,1 python train.py --config config.json
```

The batch size was set to 4, which gives an effective batch size of 2 per GPU. Training is expected to take approximately 3 to 4 days.
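To make the per-GPU arithmetic concrete, here is a minimal pure-Python sketch of how a data-parallel wrapper shards a global batch across devices. `shard_batch` is a hypothetical helper, not part of the MP-SENet codebase; it just mimics the DataParallel-style near-equal split:

```python
def shard_batch(batch, num_devices):
    """Split a batch into near-equal per-device chunks (DataParallel-style)."""
    base, remainder = divmod(len(batch), num_devices)
    shards, start = [], 0
    for i in range(num_devices):
        size = base + (1 if i < remainder else 0)  # spread leftover samples
        shards.append(batch[start:start + size])
        start += size
    return shards

batch = list(range(4))           # global batch size 4, as in the config
shards = shard_batch(batch, 2)   # two GPUs via CUDA_VISIBLE_DEVICES=0,1
print([len(s) for s in shards])  # → [2, 2]
```

With a global batch of 4 on two GPUs, each device sees 2 samples per step, matching the numbers above.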
Hi, your paper and code are excellent! I have learned a lot about speech enhancement from the paper, and I find your code to be very well-structured and clear. Thank you so much!
I have some questions:
Thanks in advance.