modelscope / FunASR

A Fundamental End-to-End Speech Recognition Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Recognition, Voice Activity Detection, Text Post-processing etc.
https://www.funasr.com

Error when fine-tuning following finetune.sh in seacoparaformer: do I need to modify ["source", "target"], i.e. switch to the hotword data format? #2165

Closed YouTwoMeToo closed 3 weeks ago

YouTwoMeToo commented 3 weeks ago

```sh
# data dir, which contains: train.json, val.json
data_dir="../../../data/list"
train_data="${data_dir}/train.jsonl"
val_data="${data_dir}/val.jsonl"

# generate train.jsonl and val.jsonl from wav.scp and text.txt
scp2jsonl \
    ++scp_file_list='["../../../data/list/train_wav.scp", "../../../data/list/train_text.txt"]' \
    ++data_type_list='["source", "target"]' \
    ++jsonl_file_out="${train_data}"

scp2jsonl \
    ++scp_file_list='["../../../data/list/val_wav.scp", "../../../data/list/val_text.txt"]' \
    ++data_type_list='["source", "target"]' \
    ++jsonl_file_out="${val_data}"
```
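Before training, it can help to confirm which fields scp2jsonl actually wrote (and whether the hotword recipe expects different ones). A minimal sketch, assuming only that the generated train.jsonl is a JSON-lines file at the path configured above:

```python
import json

# Print the first few records of the generated manifest to see the field
# names scp2jsonl wrote for "source"/"target" style data.
train_jsonl = "../../../data/list/train.jsonl"  # matches ${train_data} above

with open(train_jsonl, encoding="utf-8") as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        print(sorted(record.keys()))
        if i == 2:  # three records are enough to see the schema
            break
```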

rank0: File "/usr/local/lib/python3.10/dist-packages/hydra/main.py", line 94, in decorated_main

rank0: File "/usr/local/lib/python3.10/dist-packages/hydra/_internal/utils.py", line 394, in _run_hydra

rank0: File "/usr/local/lib/python3.10/dist-packages/hydra/_internal/utils.py", line 457, in _run_app

rank0: File "/usr/local/lib/python3.10/dist-packages/hydra/_internal/utils.py", line 223, in run_and_report rank0: raise ex rank0: File "/usr/local/lib/python3.10/dist-packages/hydra/_internal/utils.py", line 220, in run_and_report rank0: return func() rank0: File "/usr/local/lib/python3.10/dist-packages/hydra/_internal/utils.py", line 458, in rank0: lambda: hydra.run( rank0: File "/usr/local/lib/python3.10/dist-packages/hydra/internal/hydra.py", line 132, in run rank0: = ret.return_value rank0: File "/usr/local/lib/python3.10/dist-packages/hydra/core/utils.py", line 260, in return_value rank0: raise self._return_value rank0: File "/usr/local/lib/python3.10/dist-packages/hydra/core/utils.py", line 186, in run_job rank0: ret.return_value = task_function(task_cfg) rank0: File "/wind/aispace/train/source/src/FunASR/examples/industrial_data_pretraining/paraformer/../../../funasr/bin/train_ds.py", line 56, in main_hydra

rank0: File "/wind/aispace/train/source/src/FunASR/examples/industrial_data_pretraining/paraformer/../../../funasr/bin/train_ds.py", line 170, in main

rank0: File "/wind/aispace/train/source/src/FunASR/funasr/train_utils/trainer_ds.py", line 578, in train_epoch rank0: for batch_idx, batch in enumerate(dataloader_train): rank0: File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 701, in next rank0: data = self._next_data() rank0: File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 1465, in _next_data rank0: return self._process_data(data) rank0: File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 1491, in _process_data

rank0: File "/usr/local/lib/python3.10/dist-packages/torch/_utils.py", line 715, in reraise rank0: raise exception rank0: AssertionError: Caught AssertionError in DataLoader worker process 0. rank0: Original Traceback (most recent call last): rank0: File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/worker.py", line 351, in _worker_loop rank0: data = fetcher.fetch(index) # type: ignorepossibly-undefined: File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch rank0: data = self.dataset[idx] for idx in possibly_batched_index: File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 52, in rank0: data = self.dataset[idx] for idx in possibly_batched_index: File "/wind/aispace/train/source/src/FunASR/funasr/datasets/audio_datasets/datasets.py", line 75, in getitem rank0: speech, speech_lengths = extract_fbank( rank0: File "/wind/aispace/train/source/src/FunASR/funasr/utils/load_utils.py", line 173, in extract_fbank rank0: data, data_len = frontend(data, data_len, kwargs) rank0: File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl rank0: return self._call_impl(*args, *kwargs) rank0: File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl rank0: return forward_call(args, kwargs) rank0: File "/wind/aispace/train/source/src/FunASR/funasr/frontends/wav_frontend.py", line 134, in forward rank0: mat = kaldi.fbank( rank0: File "/usr/local/lib/python3.10/dist-packages/torchaudio/compliance/kaldi.py", line 591, in fbank rank0: waveform, window_shift, window_size, padded_window_size = _get_waveform_and_window_properties( rank0: File "/usr/local/lib/python3.10/dist-packages/torchaudio/compliance/kaldi.py", line 142, in _get_waveform_and_window_properties rank0: assert 2 <= window_size <= len(waveform), "choose a window size {} that is [2, {}]".format( rank0: AssertionError: choose a window size 0 that is [2, 0] [2024-10-22 08:29:59,396][root][INFO] - rank: 3, dataloader start from step: 0, batch_num: 5735, after: 5735 rank0:[W1022 08:30:00.602830270 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. 
This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator()) W1022 08:30:01.507000 1550 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 1592 closing signal SIGTERM W1022 08:30:01.508000 1550 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 1593 closing signal SIGTERM W1022 08:30:01.508000 1550 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 1594 closing signal SIGTERM E1022 08:30:01.924000 1550 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 1591) of binary: /usr/bin/python Traceback (most recent call last): File "/usr/local/bin/torchrun", line 8, in sys.exit(main()) File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/init.py", line 355, in wrapper return f(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 919, in main run(args) File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 910, in run elastic_launch( File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 138, in call return launch_agent(self._config, self._entrypoint, list(args)) File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 269, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
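The decisive line is the assertion from torchaudio's kaldi.fbank, `choose a window size 0 that is [2, 0]`: a waveform with zero samples reached feature extraction. The same class of failure can be reproduced in isolation with an empty tensor (a sketch only; the exact window size printed depends on the frontend's frame settings):

```python
import torch
import torchaudio.compliance.kaldi as kaldi

# An (effectively) empty waveform fails inside kaldi.fbank before any
# model code runs, raising the same kind of assertion as in the traceback.
empty = torch.zeros(1, 0)  # 1 channel, 0 samples

try:
    kaldi.fbank(empty, num_mel_bins=80, sample_frequency=16000)
except AssertionError as e:
    print("AssertionError:", e)
```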

LauraGPT commented 3 weeks ago

The data contains some very short audio clips.
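That is, some wav.scp entries point to empty or extremely short recordings, which crash the fbank frontend as shown above. A minimal diagnostic sketch for locating them, assuming wav.scp lines of the form `utt_id /path/to/audio.wav` and that the files are readable with the soundfile package; the 25 ms threshold mirrors the default fbank frame length and is only illustrative:

```python
import soundfile as sf

# List entries in train_wav.scp whose audio is missing, unreadable, or
# shorter than one 25 ms analysis window; these are the clips to drop
# before regenerating train.jsonl / val.jsonl.
scp_path = "../../../data/list/train_wav.scp"  # same file as in the script above
min_duration_s = 0.025  # illustrative threshold (default fbank frame length)

with open(scp_path, encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        parts = line.split(maxsplit=1)
        if len(parts) != 2:
            print(f"MALFORMED\t{line}")
            continue
        utt_id, wav_path = parts
        try:
            info = sf.info(wav_path)
        except Exception as e:  # missing or unreadable file
            print(f"{utt_id}\tUNREADABLE\t{wav_path}\t{e}")
            continue
        if info.duration < min_duration_s:
            print(f"{utt_id}\t{info.duration:.4f}s\t{wav_path}")
```

Entries flagged this way can be removed from both wav.scp and text.txt before rerunning scp2jsonl.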