PaddlePaddle / Paddle

PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the 『飞桨』 core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning and machine learning)
http://www.paddlepaddle.org/
Apache License 2.0

LSTM inference works fine on GPU but reports a dimension error on CPU #44776

Closed: yeyupiaoling closed this issue 1 year ago

yeyupiaoling commented 1 year ago

Describe the Bug

Inference on GPU works fine, but inference on CPU raises the following error.

    File "D:\yeyupiaoling\PyCharm\PPASR\ppasr\model_utils\deepspeech2\rnn.py", line 65, in forward
      x, final_state = self.rnn[i](x, x_lens, init_state_list[i])
    File "E:\ProgramData\Anaconda3\envs\PaddlePaddle\lib\site-packages\paddle\fluid\dygraph\layers.py", line 930, in __call__
      return self._dygraph_call_func(*inputs, **kwargs)
    File "E:\ProgramData\Anaconda3\envs\PaddlePaddle\lib\site-packages\paddle\fluid\dygraph\layers.py", line 915, in _dygraph_call_func
      outputs = self.forward(*inputs, **kwargs)
    File "D:\yeyupiaoling\PyCharm\PPASR\ppasr\model_utils\deepspeech2\rnn.py", line 21, in forward
      x, final_state = self.rnn(x, init_state, x_lens)  # [B, T, D]
    File "E:\ProgramData\Anaconda3\envs\PaddlePaddle\lib\site-packages\paddle\fluid\dygraph\layers.py", line 930, in __call__
      return self._dygraph_call_func(*inputs, **kwargs)
    File "E:\ProgramData\Anaconda3\envs\PaddlePaddle\lib\site-packages\paddle\fluid\dygraph\layers.py", line 915, in _dygraph_call_func
      outputs = self.forward(*inputs, **kwargs)
    File "E:\ProgramData\Anaconda3\envs\PaddlePaddle\lib\site-packages\paddle\nn\layer\rnn.py", line 1077, in forward
      return self._cudnn_impl(inputs, initial_states, sequence_length)
    File "E:\ProgramData\Anaconda3\envs\PaddlePaddle\lib\site-packages\paddle\nn\layer\rnn.py", line 1052, in _cudnn_impl
      type="rnn", inputs=inputs, outputs=outputs, attrs=attrs)
    File "E:\ProgramData\Anaconda3\envs\PaddlePaddle\lib\site-packages\paddle\fluid\dygraph\layer_object_helper.py", line 53, in append_op
      stop_gradient=stop_gradient)
    File "E:\ProgramData\Anaconda3\envs\PaddlePaddle\lib\site-packages\paddle\fluid\framework.py", line 3621, in append_op
      attrs=kwargs.get("attrs", None))
    File "E:\ProgramData\Anaconda3\envs\PaddlePaddle\lib\site-packages\paddle\fluid\framework.py", line 2635, in __init__
      for frame in traceback.extract_stack():

    InvalidArgumentError: The fisrt matrix width should be same as second matrix height,but received fisrt matrix width 1024, second matrix height 2048
      [Hint: Expected dim_a.width_ == dim_b.height_, but received dim_a.width_:1024 != dim_b.height_:2048.] (at ..\paddle/phi/kernels/funcs/blas/blas_impl.h:2210)
      [operator < rnn > error]

Code snippet

import paddle.nn as nn


class RNNForward(nn.Layer):
    def __init__(self, rnn_input_size, h_size, use_gru):
        super().__init__()
        # Single-direction recurrent layer: GRU or LSTM depending on the config.
        if use_gru:
            self.rnn = nn.GRU(input_size=rnn_input_size,
                              hidden_size=h_size,
                              direction="forward")
        else:
            self.rnn = nn.LSTM(input_size=rnn_input_size,
                               hidden_size=h_size,
                               direction="forward")
        self.norm = nn.LayerNorm(h_size)

    def forward(self, x, x_lens, init_state):
        x, final_state = self.rnn(x, init_state, x_lens)  # [B, T, D]
        x = self.norm(x)
        return x, final_state
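For context, a minimal sketch of how the RNNForward layer above gets exercised. The batch size, sequence length, feature size, and hidden size are illustrative only (not the project's real config), and passing init_state=None lets paddle.nn.LSTM create zero-initialized states itself:

import paddle

# Illustrative sizes only; in the real model the ConvStack output feeds the RNN.
batch, time_steps, feat, hidden = 2, 50, 1248, 2048

layer = RNNForward(rnn_input_size=feat, h_size=hidden, use_gru=False)
x = paddle.randn([batch, time_steps, feat])                               # [B, T, D] features
x_lens = paddle.to_tensor([time_steps, time_steps - 10], dtype='int64')  # valid length per sample
out, final_state = layer(x, x_lens, None)                                 # None -> zero-initialized (h, c)
print(out.shape)  # [2, 50, 2048]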

Additional Supplementary Information

No response

zh794390558 commented 1 year ago

dim_a.width_:1024 != dim_b.height_:2048.

Please print out the model structure. Most likely some parameter is wrong, leading to a problem such as whether the output after the LSTM gets concatenated or added.
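One way to do that is paddle.summary. A minimal sketch, assuming the loaded network is available as model and using placeholder input shapes (a [batch, feat_dim, time] spectrogram plus a length tensor) instead of the project's real ones:

import paddle

# Prints each layer with its input/output shapes and parameter counts.
# The shapes/dtypes below are placeholders; substitute the model's real inputs.
paddle.summary(model,
               input_size=[(1, 161, 900), (1,)],
               dtypes=['float32', 'int64'])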

yeyupiaoling commented 1 year ago
------------------------------------------------------------------------------------------------------------------
 Layer (type)              Input Shape                             Output Shape                      Param #    
==================================================================================================================
   Conv2D-1            [[1, 1, 900, 161]]                        [1, 32, 449, 80]                      320      
    GELU-1             [[1, 32, 449, 80]]                        [1, 32, 449, 80]                       0       
   ConvBn-1          [[1, 1, 900, 161], [1]]                 [[1, 32, 449, 80], [1]]                    0       
   Conv2D-2            [[1, 32, 449, 80]]                        [1, 32, 224, 39]                     9,248     
    GELU-2             [[1, 32, 224, 39]]                        [1, 32, 224, 39]                       0       
   ConvBn-2          [[1, 32, 449, 80], [1]]                 [[1, 32, 224, 39], [1]]                    0       
  ConvStack-1         [[1, 161, 900], [1]]                    [[1, 224, 1248], [1]]                     0       
    LSTM-1         [[1, 224, 1248], None, [1]]    [[1, 224, 2048], [[1, 1, 2048], [1, 1, 2048]]]   27,017,216   
  LayerNorm-1           [[1, 224, 2048]]                          [1, 224, 2048]                      4,096     
 RNNForward-1      [[1, 224, 1248], [1], None]    [[1, 224, 2048], [[1, 1, 2048], [1, 1, 2048]]]        0       
    LSTM-2         [[1, 224, 2048], None, [1]]    [[1, 224, 2048], [[1, 1, 2048], [1, 1, 2048]]]   33,570,816   
  LayerNorm-2           [[1, 224, 2048]]                          [1, 224, 2048]                      4,096     
 RNNForward-2      [[1, 224, 2048], [1], None]    [[1, 224, 2048], [[1, 1, 2048], [1, 1, 2048]]]        0       
    LSTM-3         [[1, 224, 2048], None, [1]]    [[1, 224, 2048], [[1, 1, 2048], [1, 1, 2048]]]   33,570,816   
  LayerNorm-3           [[1, 224, 2048]]                          [1, 224, 2048]                      4,096     
 RNNForward-3      [[1, 224, 2048], [1], None]    [[1, 224, 2048], [[1, 1, 2048], [1, 1, 2048]]]        0       
    LSTM-4         [[1, 224, 2048], None, [1]]    [[1, 224, 2048], [[1, 1, 2048], [1, 1, 2048]]]   33,570,816   
  LayerNorm-4           [[1, 224, 2048]]                          [1, 224, 2048]                      4,096     
 RNNForward-4      [[1, 224, 2048], [1], None]    [[1, 224, 2048], [[1, 1, 2048], [1, 1, 2048]]]        0       
    LSTM-5         [[1, 224, 2048], None, [1]]    [[1, 224, 2048], [[1, 1, 2048], [1, 1, 2048]]]   33,570,816   
  LayerNorm-5           [[1, 224, 2048]]                          [1, 224, 2048]                      4,096     
 RNNForward-5      [[1, 224, 2048], [1], None]    [[1, 224, 2048], [[1, 1, 2048], [1, 1, 2048]]]        0       
  RNNStack-1    [[1, 224, 1248], [1], None, None]  [[1, 224, 2048], [5, 1, 2048], [5, 1, 2048]]         0       
   Linear-1             [[1, 224, 2048]]                          [1, 224, 6436]                   13,187,364   
==================================================================================================================
Total params: 174,517,892
Trainable params: 174,517,892
Non-trainable params: 0
------------------------------------------------------------------------------------------------------------------
Input size (MB): 0.55
Forward/backward pass size (MB): 102.31
Params size (MB): 665.73
Estimated Total Size (MB): 768.59
------------------------------------------------------------------------------------------------------------------

zh794390558 commented 1 year ago

The model is a GRU one. Check whether the calls following the lstm/gru in the code behave as expected. It could be a configuration file problem, or a mismatched model.
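A quick sanity check along those lines, sketched under the assumption that the loaded network is available as model (the parameter-name filter is only illustrative):

# Dump the recurrent layers' weight shapes to see which hidden size is actually
# baked into the checkpoint, then compare against the configuration being used.
for name, param in model.named_parameters():
    if 'rnn' in name and 'weight' in name:
        print(name, param.shape)
# For paddle.nn.LSTM, weight_ih_l0 is [4 * hidden_size, input_size] and
# weight_hh_l0 is [4 * hidden_size, hidden_size]; for nn.GRU the factor is 3.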

yeyupiaoling commented 1 year ago

The model code is here: https://github.com/yeyupiaoling/PPASR/tree/master/ppasr/model_utils/deepspeech2

yeyupiaoling commented 1 year ago

The structure I printed earlier was wrong; I printed it again, and the model does use LSTM.

yeyupiaoling commented 1 year ago

It was my mistake: the initial_states I passed to paddle.nn.LSTM() had the wrong dimensions. After fixing that, both GPU and CPU inference work.
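For anyone who hits the same error: paddle.nn.LSTM expects initial_states as a tuple (h0, c0), each with shape [num_layers * num_directions, batch_size, hidden_size]. A minimal sketch with illustrative sizes:

import paddle
import paddle.nn as nn

batch, time_steps, input_size, hidden_size = 2, 50, 1248, 2048  # illustrative only
lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size, direction="forward")

x = paddle.randn([batch, time_steps, input_size])
# One layer, one direction -> the state tensors' leading dimension is 1.
h0 = paddle.zeros([1, batch, hidden_size])
c0 = paddle.zeros([1, batch, hidden_size])
y, (hn, cn) = lstm(x, (h0, c0))
print(y.shape)  # [2, 50, 2048]

A state tensor whose last dimension does not match hidden_size is exactly the kind of input that produces a width/height mismatch like the 1024 vs 2048 one in the traceback above.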