alibaba / MNN

MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
http://www.mnn.zone/
8.63k stars 1.66k forks

Error is between /model.23/TopK and /model.23/Unsqueeze #2949

Closed: helm-apprentice closed this issue 3 weeks ago

helm-apprentice commented 3 months ago

Platform (include the target platform as well if cross-compiling):

Ubuntu 20.04

GitHub version: latest

Build log:

```
python ../tools/script/testMNNFromOnnx.py yolov10n.onnx DEBUG
Dir exist
onnx/test.onnx tensor(float) ['output0']
inputs: images onnx/
outputs: onnx/output0.txt (1, 300, 6) onnx/
The device support i8sdot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model..., target version: 2.9
[10:06:06] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:46: ONNX Model ir version: 7
[10:06:06] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:47: ONNX Model opset version: 13
[10:06:06] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.11/Resize_output_0 has empty input, the index is 1
[10:06:06] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.14/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ output0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: output0
output0: (1, 300, 6, )
TESTERROR output0 value error : absMaxV:641.409790 - DiffMax 629.570007
Error for output output0
Save mnn result to  .error director
```


```
Debug Mode: True
onnx/test.onnx tensor(float) ['/model.3/act/Mul_output_0']
inputs: images onnx/
outputs: onnx//model.3/act/Mul_output_0.txt (1, 64, 80, 80) onnx//model.3/act/
The device support i8sdot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model..., target version: 2.9
[10:06:10] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:46: ONNX Model ir version: 7
[10:06:10] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:47: ONNX Model opset version: 13
[10:06:10] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.11/Resize_output_0 has empty input, the index is 1
[10:06:10] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.14/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.3/act/Mul_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.3/act/Mul_output_0
/model.3/act/Mul_output_0: (1, 64, 80, 80, )
TEST_SUCCESS
```

```
Test Node : /model.3/act/Mul True
onnx/test.onnx tensor(float) ['/model.4/cv2/act/Mul_output_0']
inputs: images onnx/
outputs: onnx//model.4/cv2/act/Mul_output_0.txt (1, 64, 80, 80) onnx//model.4/cv2/act/
The device support i8sdot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model..., target version: 2.9
[10:06:13] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:46: ONNX Model ir version: 7
[10:06:13] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:47: ONNX Model opset version: 13
[10:06:13] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.11/Resize_output_0 has empty input, the index is 1
[10:06:13] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.14/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.4/cv2/act/Mul_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.4/cv2/act/Mul_output_0
/model.4/cv2/act/Mul_output_0: (1, 64, 80, 80, )
TEST_SUCCESS
```

```
Test Node : /model.4/cv2/act/Mul True
onnx/test.onnx tensor(float) ['/model.23/Concat_5_output_0']
inputs: images onnx/
outputs: onnx//model.23/Concat_5_output_0.txt (1, 84, 8400) onnx//model.23/
The device support i8sdot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model..., target version: 2.9
[10:06:18] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:46: ONNX Model ir version: 7
[10:06:18] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:47: ONNX Model opset version: 13
[10:06:18] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.11/Resize_output_0 has empty input, the index is 1
[10:06:18] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.14/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.23/Concat_5_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.23/Concat_5_output_0
/model.23/Concat_5_output_0: (1, 84, 8400, )
TEST_SUCCESS
```

```
Test Node : /model.23/Concat_5 True
onnx/test.onnx tensor(float) ['/model.23/Transpose_output_0']
inputs: images onnx/
outputs: onnx//model.23/Transpose_output_0.txt (1, 8400, 84) onnx//model.23/
The device support i8sdot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model..., target version: 2.9
[10:06:22] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:46: ONNX Model ir version: 7
[10:06:22] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:47: ONNX Model opset version: 13
[10:06:22] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.11/Resize_output_0 has empty input, the index is 1
[10:06:22] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.14/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.23/Transpose_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.23/Transpose_output_0
/model.23/Transpose_output_0: (1, 8400, 84, )
TEST_SUCCESS
```

```
Test Node : /model.23/Transpose True
onnx/test.onnx tensor(float) ['/model.23/Split_2_output_0']
inputs: images onnx/
outputs: onnx//model.23/Split_2_output_0.txt (1, 8400, 4) onnx//model.23/
The device support i8sdot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model..., target version: 2.9
[10:06:26] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:46: ONNX Model ir version: 7
[10:06:26] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:47: ONNX Model opset version: 13
[10:06:26] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.11/Resize_output_0 has empty input, the index is 1
[10:06:26] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.14/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.23/Split_2_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.23/Split_2_output_0
/model.23/Split_2_output_0: (1, 8400, 4, )
TEST_SUCCESS
```

```
Test Node : /model.23/Split_2 True
Error is between /model.23/Split_2 and /model.23/Concat_6
onnx/test.onnx tensor(float) ['/model.23/GatherElements_2_output_0']
inputs: images onnx/
outputs: onnx//model.23/GatherElements_2_output_0.txt (1, 300, 4) onnx//model.23/
The device support i8sdot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model..., target version: 2.9
[10:06:29] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:46: ONNX Model ir version: 7
[10:06:29] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:47: ONNX Model opset version: 13
[10:06:29] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.11/Resize_output_0 has empty input, the index is 1
[10:06:29] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.14/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.23/GatherElements_2_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.23/GatherElements_2_output_0
/model.23/GatherElements_2_output_0: (1, 300, 4, )
TESTERROR /model.23/GatherElements_2_output_0 value error : absMaxV:654.897217 - DiffMax 632.648193
Error for output /model.23/GatherElements_2_output_0
Save mnn result to  .error director
```

```
Test Node : /model.23/GatherElements_2 False
onnx/test.onnx tensor(float) ['/model.23/GatherElements_output_0']
inputs: images onnx/
outputs: onnx//model.23/GatherElements_output_0.txt (1, 300, 4) onnx//model.23/
The device support i8sdot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model..., target version: 2.9
[10:06:32] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:46: ONNX Model ir version: 7
[10:06:32] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:47: ONNX Model opset version: 13
[10:06:32] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.11/Resize_output_0 has empty input, the index is 1
[10:06:32] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.14/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.23/GatherElements_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.23/GatherElements_output_0
/model.23/GatherElements_output_0: (1, 300, 4, )
TESTERROR /model.23/GatherElements_output_0 value error : absMaxV:659.676880 - DiffMax 633.775269
Error for output /model.23/GatherElements_output_0
Save mnn result to  .error director
```

```
Test Node : /model.23/GatherElements False
onnx/test.onnx tensor(float) ['/model.23/Split_2_output_0']
inputs: images onnx/
outputs: onnx//model.23/Split_2_output_0.txt (1, 8400, 4) onnx//model.23/
The device support i8sdot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model..., target version: 2.9
[10:06:35] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:46: ONNX Model ir version: 7
[10:06:35] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:47: ONNX Model opset version: 13
[10:06:35] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.11/Resize_output_0 has empty input, the index is 1
[10:06:35] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.14/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.23/Split_2_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.23/Split_2_output_0
/model.23/Split_2_output_0: (1, 8400, 4, )
TEST_SUCCESS
```

```
Test Node : /model.23/Split_2 True
onnx/test.onnx tensor(float) ['/model.23/Tile_output_0']
inputs: images onnx/
outputs: onnx//model.23/Tile_output_0.txt (1, 300, 4) onnx//model.23/
The device support i8sdot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model..., target version: 2.9
[10:06:39] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:46: ONNX Model ir version: 7
[10:06:39] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:47: ONNX Model opset version: 13
[10:06:39] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.11/Resize_output_0 has empty input, the index is 1
[10:06:39] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.14/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.23/Tile_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.23/Tile_output_0
/model.23/Tile_output_0: (1, 300, 4, )
TESTERROR /model.23/Tile_output_0 value error : absMaxV:8390.000000 - DiffMax 8361.000000
Error for output /model.23/Tile_output_0
Save mnn result to  .error director
```

```
Test Node : /model.23/Tile False
onnx/test.onnx tensor(float) ['/model.23/TopK_output_0']
inputs: images onnx/
outputs: onnx//model.23/TopK_output_0.txt (1, 300) onnx//model.23/
The device support i8sdot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model..., target version: 2.9
[10:06:42] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:46: ONNX Model ir version: 7
[10:06:42] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:47: ONNX Model opset version: 13
[10:06:42] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.11/Resize_output_0 has empty input, the index is 1
[10:06:42] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.14/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.23/TopK_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.23/TopK_output_0
/model.23/TopK_output_0: (1, 300, )
TEST_SUCCESS
```

```
Test Node : /model.23/TopK True
onnx/test.onnx tensor(float) ['/model.23/Unsqueeze_output_0']
inputs: images onnx/
outputs: onnx//model.23/Unsqueeze_output_0.txt (1, 300, 1) onnx//model.23/
The device support i8sdot:0, support fp16:0, support i8mm: 0
Start to Convert Other Model Format To MNN Model..., target version: 2.9
[10:06:45] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:46: ONNX Model ir version: 7
[10:06:45] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:47: ONNX Model opset version: 13
[10:06:45] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.11/Resize_output_0 has empty input, the index is 1
[10:06:45] /home/helm/MNN/tools/converter/source/onnx/onnxConverter.cpp:146: Check it out ==> /model.14/Resize_output_0 has empty input, the index is 1
Start to Optimize the MNN Net...
inputTensors : [ images, ]
outputTensors: [ /model.23/Unsqueeze_output_0, ]
Converted Success!
Check convert result by onnx, thredhold is 0.01
images
output: /model.23/Unsqueeze_output_0
/model.23/Unsqueeze_output_0: (1, 300, 1, )
TESTERROR /model.23/Unsqueeze_output_0 value error : absMaxV:8389.000000 - DiffMax 8186.000000
Error for output /model.23/Unsqueeze_output_0
Save mnn result to  .error director
```

```
Test Node : /model.23/Unsqueeze False
Error is between /model.23/TopK and /model.23/Unsqueeze
```
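For context, the op chain that the bisection isolates (Split → TopK → Unsqueeze → Tile → GatherElements) is the usual YOLOv10 top-300 box selection. Below is a minimal NumPy sketch of that chain, using the shapes from the log; the random tensors are hypothetical stand-ins, not the actual model data, and it only illustrates what each op is expected to produce, not MNN's implementation:

```python
import numpy as np

# Hypothetical stand-ins for the tensors in the log:
# 8400 candidate anchors, reduced to the top 300 by confidence.
rng = np.random.default_rng(0)
scores = rng.random((1, 8400))    # per-anchor confidence
boxes = rng.random((1, 8400, 4))  # per-anchor box coordinates (Split_2 output)

k = 300
# TopK over axis 1: indices of the 300 highest scores -> (1, 300)
topk_idx = np.argsort(-scores, axis=1)[:, :k]
# Unsqueeze -> (1, 300, 1), then Tile across the 4 box channels -> (1, 300, 4)
tiled_idx = np.tile(topk_idx[:, :, None], (1, 1, 4))
# GatherElements along axis 1: select the top-300 boxes -> (1, 300, 4)
gathered = np.take_along_axis(boxes, tiled_idx, axis=1)

print(gathered.shape)  # (1, 300, 4)
```

Since the Tile and Unsqueeze outputs carry integer indices (absMaxV around 8390 matches the 8400 anchor range), a mismatch there propagates directly into GatherElements and then the final output0.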

v0jiuqi commented 3 months ago

Please attach the model so we can look into it.

github-actions[bot] commented 1 month ago

Marking as stale. No activity in 60 days.