facebookresearch / denoiser

Real Time Speech Enhancement in the Waveform Domain (Interspeech 2020)

We provide a PyTorch implementation of the paper Real Time Speech Enhancement in the Waveform Domain, in which we present a causal speech enhancement model working on the raw waveform that runs in real time on a laptop CPU. The proposed model is based on an encoder-decoder architecture with skip connections. It is optimized in both the time and frequency domains, using multiple loss functions. Empirical evidence shows that it is capable of removing various kinds of background noise, including stationary and non-stationary noise, as well as room reverb. Additionally, we suggest a set of data augmentation techniques applied directly on the raw waveform which further improve model performance and its generalization abilities.

Very long training time for DNS 2020 #126

Open jhkonan opened 2 years ago

jhkonan commented 2 years ago

We are trying to reproduce the dns64 Demucs model result from scratch. We have two 2080 Ti GPUs, but the largest batch size we can fit is 14, and training takes 5 hours per epoch.

Is it supposed to take this long? We are trying to follow the instructions for the dns dataset:

https://github.com/facebookresearch/denoiser#dns-dataset

What changes need to be made to make the model training time reasonable?

#!/bin/bash
# Copyright (c) Facebook, Inc. and its affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
# authors: adiyoss and adefossez

python train.py \
  dset=dns \
  demucs.causal=1 \
  demucs.hidden=64 \
  demucs.resample=4 \
  batch_size=128 \
  revecho=1 \
  segment=10 \
  stride=2 \
  shift=16000 \
  shift_same=True \
  epochs=250 \
  ddp=1 $@

Source: https://github.com/facebookresearch/denoiser/blob/main/launch_dns.sh
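Note, in passing, that the trailing $@ in launch_dns.sh forwards any extra command-line arguments straight to train.py, so additional Hydra-style overrides can be appended when invoking the script instead of editing it. A minimal sketch, assuming num_workers is a key defined in conf/config.yaml:

# Extra arguments are passed through to train.py via "$@".
# "num_workers" is assumed to be a valid config key; check conf/config.yaml first.
bash launch_dns.sh num_workers=8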

adefossez commented 2 years ago

Long training is expected on DNS; it is a very big dataset. I believe we trained on 4 or 8 V100s and it still took a few days. One change would be to reduce demucs.hidden from 64 to 48. You can also shorten each epoch by increasing shift, for instance shift=32000, but this won't make the model converge faster.
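
For concreteness, a sketch of the launch_dns.sh command with those two suggestions applied; everything else is left as in the original script, and the exact values remain a speed/quality trade-off rather than a verified recipe:

# Sketch only: launch command with the suggested changes applied.
# demucs.hidden: 64 -> 48  (smaller model, faster epochs, likely some quality loss)
# shift: 16000 -> 32000    (fewer examples per epoch, so shorter epochs,
#                           but not faster convergence overall)
# batch_size=128 assumes large GPUs; reduce it to whatever fits
# (the reporter could only fit 14 on two 2080 Ti).
python train.py \
  dset=dns \
  demucs.causal=1 \
  demucs.hidden=48 \
  demucs.resample=4 \
  batch_size=128 \
  revecho=1 \
  segment=10 \
  stride=2 \
  shift=32000 \
  shift_same=True \
  epochs=250 \
  ddp=1 $@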