Tencent / ncnn

ncnn is a high-performance neural network inference framework optimized for the mobile platform.

Inference with a converted model hangs at a matmul layer during extractor.extract; what is the problem and how can it be fixed? #5138

Open GloryKnight-K opened 11 months ago

GloryKnight-K commented 11 months ago

detail

Running inference with the converted model hangs in extractor.extract; adding print statements shows it is stuck at a matmul layer. (Two screenshots of the hung run were attached.)
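For reference, the inference path being described boils down to the sketch below; the file names and the "in0"/"out0" blob names are assumptions based on the pnnx command later in the thread and should be checked against the actual .param file. One low-effort way to locate the stuck layer without hand-inserted prints is to rebuild ncnn with the NCNN_BENCHMARK CMake option, which logs each layer as the graph executes, so the last line printed points at the layer that never finishes.

```cpp
#include <cstdio>
#include "net.h"  // ncnn

int main()
{
    ncnn::Net net;
    // Assumption: file names follow the edgenext3 example from this thread.
    if (net.load_param("edgenext3.ncnn.param") || net.load_model("edgenext3.ncnn.bin"))
        return -1;

    // 1x3x112x112 input, matching inputshape=[1,3,112,112] used for conversion
    ncnn::Mat in(112, 112, 3);
    in.fill(0.5f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("in0", in);      // "in0"/"out0" are hypothetical blob names

    ncnn::Mat out;
    ex.extract("out0", out);  // the call reported to hang at a matmul layer
    fprintf(stderr, "out: dims=%d w=%d h=%d c=%d\n", out.dims, out.w, out.h, out.c);
    return 0;
}
```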

GloryKnight-K commented 11 months ago

I also tried converting with pnnx, but it reports errors and the converted model cannot run inference:

```
./pnnx edgenext3.pt inputshape=[1,3,112,112]
pnnxparam = edgenext3.pnnx.param
pnnxbin = edgenext3.pnnx.bin
pnnxpy = edgenext3_pnnx.py
pnnxonnx = edgenext3.pnnx.onnx
ncnnparam = edgenext3.ncnn.param
ncnnbin = edgenext3.ncnn.bin
ncnnpy = edgenext3_ncnn.py
fp16 = 1
optlevel = 2
device = cpu
inputshape = [1,3,112,112]f32
inputshape2 =
customop =
moduleop =
############# pass_level0
inline module = models.conv_encoder.ConvEncoder
inline module = models.layers.LayerNorm
inline module = models.layers.PositionalEncodingFourier
inline module = models.sdta_encoder.SDTAEncoder
inline module = models.sdta_encoder.XCA
inline module = timm.models.layers.drop.DropPath
inline module = torch.nn.modules.linear.Identity
(the same seven "inline module" lines are printed a second time)
############# pass_level1
############# pass_level2
############# pass_level3
############# pass_level4
############# pass_level5
############# pass_ncnn
force batch axis 233 for operand 2
force batch axis 233 for operand 15
force batch axis 233 for operand 25
force batch axis 233 for operand 39
force batch axis 233 for operand 49
force batch axis 233 for operand 59
force batch axis 233 for operand 60
force batch axis 233 for operand 106
force batch axis 233 for operand 116
force batch axis 233 for operand 126
force batch axis 233 for operand 136
force batch axis 233 for operand 146
force batch axis 233 for operand 156
force batch axis 233 for operand 166
force batch axis 233 for operand 176
force batch axis 233 for operand 186
force batch axis 233 for operand 187
force batch axis 233 for operand 235
force batch axis 233 for operand 245
force batch axis 233 for operand 255
force batch axis 233 for operand 256
binaryop broadcast across batch axis 233 and 0 is not supported   (repeated 21 times)
insert_reshape_linear 4   (repeated 36 times)
unsupported normalize for 3-rank tensor with axis 2   (repeated 6 times)
```

Mactarvish commented 10 months ago

Check whether the input shapes of the matmul are correct. A sketch of how to do that follows below.
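One way to act on that check, sketched under the assumption that the real blob names are read from the MatMul line of the generated .param file ("blob_a"/"blob_b" here are placeholders): extract the two blobs feeding the matmul and print their shapes. ncnn caches blobs it has already computed, so extracting both from one Extractor does not re-run the graph.

```cpp
#include <cstdio>
#include "net.h"  // ncnn

// Print the shapes of the two blobs feeding the suspect matmul.
// Replace "blob_a"/"blob_b" with the input blob names taken from the
// MatMul line in the generated .param file.
void check_matmul_inputs(const ncnn::Net& net, const ncnn::Mat& in)
{
    ncnn::Extractor ex = net.create_extractor();
    ex.input("in0", in);  // hypothetical input blob name

    ncnn::Mat a, b;
    ex.extract("blob_a", a);
    ex.extract("blob_b", b);

    fprintf(stderr, "A: dims=%d w=%d h=%d c=%d\n", a.dims, a.w, a.h, a.c);
    fprintf(stderr, "B: dims=%d w=%d h=%d c=%d\n", b.dims, b.w, b.h, b.c);
    // For a 2D matmul, A's w (inner dimension) must equal B's h.
}
```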

nihui commented 2 months ago

For the various problems that arise when converting ONNX models, it is recommended to use the latest pnnx tool to convert your model to ncnn:

```
pip install pnnx
pnnx model.onnx inputshape=[1,3,224,224]
```

Detailed reference documentation: https://github.com/pnnx/pnnx and https://github.com/Tencent/ncnn/wiki/use-ncnn-with-pytorch-or-onnx#how-to-use-pnnx
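As a usage note, the files written by the command above load on the C++ side like any other param/bin pair; a minimal sketch, assuming the model.ncnn.param / model.ncnn.bin output names implied by the command and pnnx's usual in0/out0 blob naming (verify both against the generated .param file):

```cpp
#include "net.h"  // ncnn

// Run the output of `pnnx model.onnx inputshape=[1,3,224,224]`.
// "in0"/"out0" are assumed blob names; check model.ncnn.param.
int run_converted_model()
{
    ncnn::Net net;
    if (net.load_param("model.ncnn.param") || net.load_model("model.ncnn.bin"))
        return -1;

    ncnn::Mat in(224, 224, 3);  // matches inputshape=[1,3,224,224]
    in.fill(0.0f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("in0", in);

    ncnn::Mat out;
    return ex.extract("out0", out);  // 0 on success
}
```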

lxh0510 commented 4 weeks ago

Has this been solved?