k2-fsa / icefall

https://k2-fsa.github.io/icefall/
Apache License 2.0

Is it possible to do reverberation on the fly? #1594

Open littlecuoge opened 1 month ago

littlecuoge commented 1 month ago

Hi K2 team, I know we can do reverberation in the data preparation phase. But is it possible to do it on the fly during training, so that the data augmentation varies across epochs?

csukuangfj commented 1 month ago

Please see https://github.com/k2-fsa/icefall/blob/ed6bc200e37aaea0129ae32095642c096d4ffad5/egs/yesno/ASR/tdnn/asr_datamodule.py#L170-L187

You need to:

  1. Pass --on-the-fly-feats=true to train.py (see https://github.com/k2-fsa/icefall/blob/ed6bc200e37aaea0129ae32095642c096d4ffad5/egs/yesno/ASR/tdnn/asr_datamodule.py#L114).

  2. Uncomment https://github.com/k2-fsa/icefall/blob/ed6bc200e37aaea0129ae32095642c096d4ffad5/egs/yesno/ASR/tdnn/asr_datamodule.py#L179

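Putting the two steps together, the invocation would look roughly like this (a sketch based on the yesno recipe layout referenced above; the transform line still has to be uncommented in asr_datamodule.py first):

```shell
# Sketch, assuming the yesno recipe layout linked above.
# Step 2 (uncommenting the transform in asr_datamodule.py) is done by hand;
# step 1 is just a flag on the training script:
cd egs/yesno/ASR
./tdnn/train.py --on-the-fly-feats=true
```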
JinZr commented 1 month ago

yes it’s doable, let me check the doc and get back to you later. best, jin


littlecuoge commented 1 month ago

Ah, I meant reverberation with an impulse response, not speed perturbation. Thank you @JinZr, please share your doc.
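For context, RIR reverberation is just convolving the waveform with a (randomly chosen) room impulse response. A minimal pure-Python sketch of what an on-the-fly transform with p=0.5 conceptually does (all names here are illustrative, not Lhotse's API):

```python
import random

def convolve(signal, rir):
    # Direct convolution: out[n] = sum_k signal[k] * rir[n - k].
    out = [0.0] * (len(signal) + len(rir) - 1)
    for i, s in enumerate(signal):
        for j, r in enumerate(rir):
            out[i + j] += s * r
    return out

def maybe_reverb(signal, rirs, p=0.5):
    # With probability p, convolve with a randomly chosen impulse response.
    # Because the choice is made per batch, the augmentation differs
    # every time the same cut is seen, i.e. across epochs.
    if random.random() < p:
        return convolve(signal, random.choice(rirs))
    return signal
```

Precomputed augmentation bakes one fixed RIR into the data; doing it in the dataloader re-rolls the dice each epoch, which is the point of the question.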

littlecuoge commented 1 month ago

I tried to add the RIR transform in the first place in transforms, like this:

    transforms.append(ReverbWithImpulseResponse(p=0.5))

But got an error:

    -- Process 3 terminated with the following error:
    Traceback (most recent call last):
      File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
        fn(i, *args)
      File "/icefall/egs/easy_start/ASR/zipformer/train.py", line 1265, in run
        train_one_epoch(
      File "/icefall/egs/easy_start/ASR/zipformer/train.py", line 941, in train_one_epoch
        for batch_idx, batch in enumerate(train_dl):
      File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 442, in __iter__
        return self._get_iterator()
      File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 388, in _get_iterator
        return _MultiProcessingDataLoaderIter(self)
      File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 1043, in __init__
        w.start()
      File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start
        self._popen = self._Popen(self)
      File "/usr/lib/python3.10/multiprocessing/context.py", line 224, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "/usr/lib/python3.10/multiprocessing/context.py", line 288, in _Popen
        return Popen(process_obj)
      File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__
        super().__init__(process_obj)
      File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
        self._launch(process_obj)
      File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch
        reduction.dump(process_obj, fp)
      File "/usr/lib/python3.10/multiprocessing/reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)
    TypeError: cannot pickle 'module' object

Not sure why?
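The traceback shows the error happens while the spawn-based DataLoader workers pickle the dataset and its transforms; any object that holds a reference to a module is unpicklable. A minimal stand-alone reproduction of the same TypeError (illustrative, not Lhotse code):

```python
import pickle
import sys

class BadTransform:
    # Holding a module object (here, sys) in an attribute makes the
    # instance unpicklable. Multiprocessing DataLoader workers must
    # pickle every transform they receive, hence the crash above.
    def __init__(self):
        self.mod = sys

try:
    pickle.dumps(BadTransform())
except TypeError as e:
    print(e)  # cannot pickle 'module' object
```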

pzelasko commented 1 month ago

Looks like a bug in Lhotse; I'll fix it. You can probably work around it by setting the env var LHOTSE_DILL_ENABLED=1, or by using the cuts = cuts.reverb_rir() API instead.
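A sketch of the env-var workaround (the variable must be set before the DataLoader workers are spawned; the train.py path is taken from the traceback above):

```shell
# Workaround: let Lhotse serialize transforms with dill instead of pickle
# (assumes the dill package is installed in the training environment).
export LHOTSE_DILL_ENABLED=1
python /icefall/egs/easy_start/ASR/zipformer/train.py
```

The alternative mentioned above, cuts = cuts.reverb_rir(), applies reverberation lazily on the CutSet itself, sidestepping the unpicklable transform object entirely.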

littlecuoge commented 1 month ago

@pzelasko Thanks for your reply. I tried LHOTSE_DILL_ENABLED=1 and it seems to work, but it takes about 10 min to train 50 batches. I added RIR and MUSAN noise at the same time, which may contribute, but it still takes too much time. What do you think?

pzelasko commented 1 month ago

Was it faster without RIR or MUSAN? What’s the number of data loading workers and max duration?
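To answer questions like this, it helps to time the dataloader in isolation from the model. A small generic helper (not icefall code; `train_dl` stands for any iterable of batches):

```python
import time

def time_batches(dl, n=50):
    # Time how long the iterable (e.g. a DataLoader) takes to yield
    # n batches, isolating data-loading cost from model compute.
    it = iter(dl)
    start = time.perf_counter()
    for _ in range(n):
        next(it)
    return time.perf_counter() - start
```

If `time_batches(train_dl)` accounts for most of the 10 minutes, raising the recipe's --num-workers or moving expensive augmentation (like RIR convolution) off the critical path is where to look.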