yeyupiaoling / PPASR

End-to-end Chinese speech recognition built on PaddlePaddle, from getting started to production — simple introductory examples and practical enterprise projects. Supports the currently most popular DeepSpeech2, Conformer, and Squeezeformer models.
Apache License 2.0
807 stars 128 forks

The big model throws an error #93

Closed wangcc57 closed 2 years ago

wangcc57 commented 2 years ago

The big model works fine with the wav files bundled with the project, but fails with a wav of my own. Do I need to update to the latest code?

Traceback (most recent call last):
  File "infer_path.py", line 97, in <module>
    predict_audio()
  File "infer_path.py", line 61, in predict_audio
    score, text = predictor.predict(audio_path=args.wav_path, to_an=args.to_an)
  File "/data/PPASR/ppasr/predict.py", line 184, in predict
    self.predictor.run()
OSError: In user code:

    File "export_model.py", line 24, in <module>
      trainer.export(save_model_path=args.save_model, resume_model=args.resume_model)
    File "/data/PPASR/ppasr/trainer.py", line 533, in export
      paddle.jit.save(layer=model, path=infer_model_path, input_spec=input_spec)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/jit.py", line 629, in wrapper
      func(layer, path, input_spec, **configs)
    File "/usr/local/python3/lib/python3.8/site-packages/decorator.py", line 232, in fun
      return caller(func, *(extras + args), **kw)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
      return wrapped_func(*args, **kwargs)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/base.py", line 51, in __impl__
      return func(*args, **kwargs)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/jit.py", line 867, in save
      concrete_program = static_forward.concrete_program_specify_input_spec(
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 527, in concrete_program_specify_input_spec
      concrete_program, _ = self.get_concrete_program(
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 436, in get_concrete_program
      concrete_program, partial_program_layer = self._program_cache[cache_key]
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 801, in __getitem__
      self._caches[item_id] = self._build_once(item)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 785, in _build_once
      concrete_program = ConcreteProgram.from_func_spec(
    File "/usr/local/python3/lib/python3.8/site-packages/decorator.py", line 232, in fun
      return caller(func, *(extras + args), **kw)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
      return wrapped_func(*args, **kwargs)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/base.py", line 51, in __impl__
      return func(*args, **kwargs)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 733, in from_func_spec
      outputs = static_func(*inputs)
    File "/data/PPASR/ppasr/model_utils/utils.py", line 59, in forward
      logits, _, final_chunk_state_h_box, final_chunk_state_c_box = self.model(x, audio_len, init_state_h_box, init_state_c_box)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 930, in __call__
      return self._dygraph_call_func(*inputs, **kwargs)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
      outputs = self.forward(*inputs, **kwargs)
    File "/data/PPASR/ppasr/model_utils/deepspeech2/model.py", line 53, in forward
      x, final_chunk_state_h_box, final_chunk_state_c_box = self.rnn(x, x_lens, init_state_h_box, init_state_c_box)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 930, in __call__
      return self._dygraph_call_func(*inputs, **kwargs)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
      outputs = self.forward(*inputs, **kwargs)
    File "/tmp/tmp172gmmjh.py", line 60, in forward
      [x_lens, init_state_list, i, x] = paddle.jit.dy2static.convert_while_loop(
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 45, in convert_while_loop
      loop_vars = _run_py_while(cond, body, loop_vars)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 59, in _run_py_while
      loop_vars = body(*loop_vars)
    File "/data/PPASR/ppasr/model_utils/deepspeech2/rnn.py", line 65, in forward
      x, final_state = self.rnn[i](x, x_lens, init_state_list[i])
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 930, in __call__
      return self._dygraph_call_func(*inputs, **kwargs)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
      outputs = self.forward(*inputs, **kwargs)
    File "/data/PPASR/ppasr/model_utils/deepspeech2/rnn.py", line 21, in forward
      x, final_state = self.rnn(x, init_state, x_lens)  # [B, T, D]
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 930, in __call__
      return self._dygraph_call_func(*inputs, **kwargs)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
      outputs = self.forward(*inputs, **kwargs)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/nn/layer/rnn.py", line 1077, in forward
      return self._cudnn_impl(inputs, initial_states, sequence_length)
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/nn/layer/rnn.py", line 1051, in _cudnn_impl
      self._helper.append_op(
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/dygraph/layer_object_helper.py", line 48, in append_op
      return self.main_program.current_block().append_op(
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/framework.py", line 3615, in append_op
      op = Operator(
    File "/usr/local/python3/lib/python3.8/site-packages/paddle/fluid/framework.py", line 2635, in __init__
      for frame in traceback.extract_stack():

    ExternalError: CUDNN error(3), CUDNN_STATUS_BAD_PARAM. 
      [Hint: 'CUDNN_STATUS_BAD_PARAM'.  An incorrect value or parameter was passed to the function. To correct, ensure that all the parameters being passed have valid values.  ] (at /paddle/paddle/fluid/platform/device/gpu/cuda/cudnn_helper.h:287)
      [operator < rnn > error]
yeyupiaoling commented 2 years ago

Is the audio too long? If it is, use long-audio recognition. Audio that is too short can also trigger this error.
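For reference, the duration check implied above can be done before inference using only the standard library. A minimal sketch — the 90-second threshold and the function names are illustrative assumptions, not part of PPASR:

```python
import wave


def clip_duration_seconds(wav_path):
    """Return the duration of a PCM WAV file in seconds."""
    with wave.open(wav_path, "rb") as f:
        return f.getnframes() / f.getframerate()


def is_safe_for_short_mode(wav_path, min_s=1.0, max_s=90.0):
    """Heuristic: clips outside this range should be routed to
    long-audio recognition (or rejected) instead of predicted directly."""
    duration = clip_duration_seconds(wav_path)
    return min_s <= duration <= max_s
```

Clips that fail the check would then go through the long-audio path, which splits the input into shorter segments before decoding.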

wangcc57 commented 2 years ago

test_long.wav, which is 3 minutes long, works fine, but my 50-second clip fails. I'll try long-audio recognition.

yeyupiaoling commented 2 years ago

Does test_long.wav really work with short-audio recognition too? It shouldn't. Feeding very long audio in directly is bound to run out of GPU memory.

wangcc57 commented 2 years ago

It does — running test_long.wav directly on the P40 raises no error.

yeyupiaoling commented 2 years ago

How does it behave if you switch to long-audio recognition?

wangcc57 commented 2 years ago

Long-audio recognition works. That makes no sense — is my wav cursed? :(

yeyupiaoling commented 2 years ago

That can't be it. I still think the audio is too long, because overly long input produces a similar error.

wangcc57 commented 2 years ago

Not a big deal anyway — I'll be splitting the audio into single sentences for processing regardless. Have you trained on the Wenet dataset? What resources does it need, and how long does it take?

I tried training on the Wenet dataset and found that the loss keeps rising throughout the first epoch — am I doing something wrong?

yeyupiaoling commented 2 years ago

I haven't trained on it. During the first epoch the dataset is sorted from shortest to longest, so that behavior is normal.
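The short-to-long warm-up ordering described here can be sketched roughly as follows — the `duration` field name is an assumption about the manifest format:

```python
def sort_manifest_by_duration(manifest):
    """Sort utterances from shortest to longest, as done for the
    first (warm-up) epoch; later epochs typically shuffle instead."""
    return sorted(manifest, key=lambda item: item["duration"])


manifest = [
    {"wav": "a.wav", "duration": 5.2},
    {"wav": "b.wav", "duration": 1.1},
    {"wav": "c.wav", "duration": 3.7},
]
warmup_order = sort_manifest_by_duration(manifest)
```

With short clips first, the per-batch loss tends to climb as longer (harder) utterances arrive later in the epoch, which matches the rising first-epoch loss observed above.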

wangcc57 commented 2 years ago

I've also found that two GPUs are slower than one, and sometimes training even hangs.

yeyupiaoling commented 2 years ago

That shouldn't happen. Are you going by the remaining-time estimate? Which log are you looking at?

wangcc57 commented 2 years ago

Nothing in the console, and the models directory has no saved checkpoints either. It's been several hours — a single-GPU run would have saved several times by now.

yeyupiaoling commented 2 years ago

The multi-GPU logs are in the log directory, one log per GPU.

yeyupiaoling commented 2 years ago

Check there whether training is actually progressing normally.

wangcc57 commented 2 years ago

workerlog.0

[2022-07-15 09:37:00.937632] 训练数据:14660592
[2022-07-15 09:37:38.068987] Train epoch: [1/65], batch: [0/229071], loss: 85.23431, learning rate: 0.00005000, eta: 6398 days, 7:53:18
[2022-07-15 09:41:46.882346] Train epoch: [1/65], batch: [100/229071], loss: 14.01040, learning rate: 0.00005000, eta: 428 days, 9:31:19
[2022-07-15 09:45:55.890805] Train epoch: [1/65], batch: [200/229071], loss: 17.09803, learning rate: 0.00005000, eta: 427 days, 20:36:51
[2022-07-15 09:50:07.060608] Train epoch: [1/65], batch: [300/229071], loss: 16.94832, learning rate: 0.00005000, eta: 431 days, 13:10:36

workerlog.1

[2022-07-15 09:31:09.362761] 训练数据:14660592

Below is the single-GPU log:

[2022-07-15 09:55:03.052075] 训练数据:14660592
[2022-07-15 09:55:10.405565] Train epoch: [1/65], batch: [0/458143], loss: 85.80699, learning rate: 0.00005000, eta: 2534 days, 2:46:47
[2022-07-15 09:55:25.979604] Train epoch: [1/65], batch: [100/458143], loss: 15.74516, learning rate: 0.00005000, eta: 53 days, 14:39:28
[2022-07-15 09:55:41.944422] Train epoch: [1/65], batch: [200/458143], loss: 13.97528, learning rate: 0.00005000, eta: 54 days, 22:16:02
[2022-07-15 09:55:58.001585] Train epoch: [1/65], batch: [300/458143], loss: 18.45344, learning rate: 0.00005000, eta: 55 days, 5:36:10

wangcc57 commented 2 years ago

workerlog.1 has no output at all, and the run is much slower than a single GPU. I'll go dig through your code.

yeyupiaoling commented 2 years ago

That's odd. Are your two GPUs the same model?

yeyupiaoling commented 2 years ago

Could data preprocessing be failing to keep up?

wangcc57 commented 2 years ago

Definitely the same — I deliberately installed two identical cards.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.73.08    Driver Version: 510.73.08    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla P40           Off  | 00000000:04:00.0 Off |                  Off |
| N/A   43C    P0    51W / 250W |      0MiB / 24576MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla P40           Off  | 00000000:83:00.0 Off |                  Off |
| N/A   41C    P0    50W / 250W |      0MiB / 24576MiB |      1%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
wangcc57 commented 2 years ago

GPU utilization is at 100%, so the data pipeline should be fine.

yeyupiaoling commented 2 years ago

Are both cards at 100%?

yeyupiaoling commented 2 years ago

Try running each of the 2 cards separately in single-GPU mode. Your card 1 has no log output — it may be what's slowing the whole run down.
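Pinning a run to a single card can be done by restricting GPU visibility before the framework initializes. A generic sketch — the `train.py` invocation is illustrative, not a PPASR-specific launcher:

```python
import os
import subprocess
import sys


def env_for_gpu(gpu_id):
    """Environment in which only the given GPU is visible;
    inside the child process that card appears as device 0."""
    return dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))


def launch_on_gpu(gpu_id, script="train.py"):
    """Start one training process pinned to a single card."""
    return subprocess.Popen([sys.executable, script], env=env_for_gpu(gpu_id))
```

Running the same script once with `gpu_id=0` and once with `gpu_id=1` makes it easy to see whether one card is consistently slower or silent.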

wangcc57 commented 2 years ago

I'll give that a try.

wangcc57 commented 2 years ago
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.73.08    Driver Version: 510.73.08    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla P40           Off  | 00000000:04:00.0 Off |                  Off |
| N/A   39C    P0    50W / 250W |      0MiB / 24576MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla P40           Off  | 00000000:83:00.0 Off |                  Off |
| N/A   53C    P0   200W / 250W |   4601MiB / 24576MiB |     95%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    1   N/A  N/A     12426      C   python3                          4599MiB |
+-----------------------------------------------------------------------------+

[2022-07-15 10:44:59.195437] 训练数据:14660592
[2022-07-15 10:45:04.379965] Train epoch: [1/65], batch: [0/458143], loss: 88.21230, learning rate: 0.00005000, eta: 1786 days, 13:44:50
[2022-07-15 10:45:17.785773] Train epoch: [1/65], batch: [100/458143], loss: 15.96248, learning rate: 0.00005000, eta: 46 days, 3:21:53
[2022-07-15 10:45:30.826052] Train epoch: [1/65], batch: [200/458143], loss: 13.98875, learning rate: 0.00005000, eta: 44 days, 20:11:12
[2022-07-15 10:45:43.916232] Train epoch: [1/65], batch: [300/458143], loss: 18.24256, learning rate: 0.00005000, eta: 45 days, 0:27:07
[2022-07-15 10:45:57.125810] Train epoch: [1/65], batch: [400/458143], loss: 17.17925, learning rate: 0.00005000, eta: 45 days, 10:49:58
[2022-07-15 10:46:10.207853] Train epoch: [1/65], batch: [500/458143], loss: 12.79642, learning rate: 0.00005000, eta: 45 days, 0:00:46

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.73.08    Driver Version: 510.73.08    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla P40           Off  | 00000000:04:00.0 Off |                  Off |
| N/A   51C    P0   154W / 250W |   4601MiB / 24576MiB |     97%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla P40           Off  | 00000000:83:00.0 Off |                  Off |
| N/A   47C    P0    51W / 250W |      0MiB / 24576MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A     12622      C   python3                          4599MiB |
+-----------------------------------------------------------------------------+

[2022-07-15 10:48:46.283469] 训练数据:14660592
[2022-07-15 10:48:52.645588] Train epoch: [1/65], batch: [0/458143], loss: 87.22278, learning rate: 0.00005000, eta: 2192 days, 3:52:18
[2022-07-15 10:49:05.040063] Train epoch: [1/65], batch: [100/458143], loss: 15.78344, learning rate: 0.00005000, eta: 42 days, 15:40:24
[2022-07-15 10:49:17.719734] Train epoch: [1/65], batch: [200/458143], loss: 14.05792, learning rate: 0.00005000, eta: 43 days, 15:09:58
[2022-07-15 10:49:30.346088] Train epoch: [1/65], batch: [300/458143], loss: 18.16205, learning rate: 0.00005000, eta: 43 days, 10:08:28
wangcc57 commented 2 years ago

Running each card separately works fine; running both together is slow and hangs.

yeyupiaoling commented 2 years ago

That's bizarre — for me multi-GPU usually cuts the time roughly in half. Try setting the data-loading worker threads to 16; your CPU has at least 16 cores, right? https://github.com/yeyupiaoling/PPASR/blob/e4ed0f821cfdaaf55741650b4db601cce53378c0/train.py#L10

wangcc57 commented 2 years ago

I've already changed that to 32; the CPU has 32 cores.

yeyupiaoling commented 2 years ago

Do your other parallel jobs run normally? What's your NCCL version?

wangcc57 commented 2 years ago

nccl-local-repo-rhel7-2.12.12-cuda11.7-1.0-1.x86_64. I suspect a CUDA version mismatch — the GPU reports 11.6, but the 11.7 install also works. I'll switch to 11.6 and try again.
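The NCCL runtime actually being loaded can be queried directly through its C API, `ncclGetVersion()`, rather than inferred from the package name. A sketch using ctypes — the library name `libnccl.so.2` and the integer encoding (e.g. 21212 for 2.12.12 on recent NCCL releases) are assumptions about the installed build:

```python
import ctypes


def get_nccl_version():
    """Ask the NCCL runtime for its version via ncclGetVersion().
    Returns the encoded version integer (e.g. 21212 for 2.12.12),
    or None if NCCL is not installed or the call fails."""
    try:
        lib = ctypes.CDLL("libnccl.so.2")
    except OSError:
        return None  # no NCCL runtime on this machine
    version = ctypes.c_int()
    # ncclGetVersion returns ncclSuccess (0) on success
    if lib.ncclGetVersion(ctypes.byref(version)) != 0:
        return None
    return version.value
```

This helps confirm that the NCCL the framework links against matches the package that was installed, which matters when multiple CUDA toolkits coexist.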

yeyupiaoling commented 2 years ago

wangcc57 commented 2 years ago

Same as before: the GPUs are fully loaded, but it's still very slow.

yeyupiaoling commented 2 years ago

That's really strange. Try running deepspeech directly instead.

yeyupiaoling commented 2 years ago

Do your other multi-GPU jobs run normally?

wangcc57 commented 2 years ago

The server was down for maintenance over the weekend; I haven't tested yet.

yeyupiaoling commented 2 years ago

It just occurred to me — are you using the same model for the single-GPU and dual-GPU runs? You should compare the total time for one full training epoch.