ChengpengChen / RepGhost

RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization
MIT License

strange output when converted into MNN #4

Closed Yuhyeong closed 1 year ago

Yuhyeong commented 1 year ago

I first converted the pretrained RepGhost_1_0x weights into ONNX. This step is correct: I compared the two models' outputs on the same (5, 3, 224, 224) input and they match.

I then converted the ONNX model into MNN, but found that the MNN model gives its output as a flat (5000,) list.

After reshaping it into (5, 1000), the same shape as the ONNX output, the MNN result still differs from the ONNX output.

Only row 0 of the (5, 1000) result matches ONNX's.
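To make the check above concrete, here is a minimal NumPy sketch of the reshape-and-compare step. The arrays are synthetic stand-ins, not real model outputs; in practice `onnx_out` would come from onnxruntime and `mnn_out` from the MNN runtime:

```python
import numpy as np

# Synthetic stand-in for the (5, 1000) ONNX reference output.
rng = np.random.default_rng(0)
onnx_out = rng.standard_normal((5, 1000)).astype(np.float32)

# The MNN runtime returned a flat (5000,) array; a correct converter
# would produce exactly the flattened reference.
mnn_out = onnx_out.reshape(-1)

# Restore the batch dimension and compare row by row.
restored = mnn_out.reshape(5, 1000)
row_ok = [np.allclose(restored[i], onnx_out[i], atol=1e-4) for i in range(5)]
print(row_ok)  # all True here; in the reported bug only row 0 matched
```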

ChengpengChen commented 1 year ago

Hi, Yuhyeong,

After checking the converted MNN model, I found that this is caused by the last FC layer of the model. It appears to be a bug in MNN when converting FC layers with batch size > 1.

We only use MNN for latency evaluation and do not guarantee output consistency of converted models. What you can do now:

  1. replace the last FC layer of the model with a 1x1 conv layer, or
  2. test the MNN model with batch size = 1
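For workaround 1, the swap is safe because a fully-connected layer and a 1x1 convolution over (N, C, 1, 1) features compute the same result with the same weights. A minimal NumPy sketch of the equivalence (the channel counts are illustrative, not taken from RepGhost_1_0x):

```python
import numpy as np

rng = np.random.default_rng(0)
N, C_in, C_out = 5, 1280, 1000  # illustrative sizes
x = rng.standard_normal((N, C_in)).astype(np.float32)
W = rng.standard_normal((C_out, C_in)).astype(np.float32)
b = rng.standard_normal(C_out).astype(np.float32)

# Fully-connected layer: y = x @ W^T + b, shape (N, C_out).
fc_out = x @ W.T + b

# Equivalent 1x1 conv: reshape features to (N, C_in, 1, 1) and the FC
# weight to a (C_out, C_in, 1, 1) kernel, then sum over C_in per position.
x4 = x.reshape(N, C_in, 1, 1)
k = W.reshape(C_out, C_in, 1, 1)
conv_out = np.einsum('ncij,ocij->no', x4, k) + b

print(np.allclose(fc_out, conv_out, atol=1e-4))  # True
```

In a framework, this amounts to replacing `Linear(C_in, C_out)` with `Conv2d(C_in, C_out, 1)` (copying the weights with a reshape) and dropping the flatten before it, so the MNN converter never sees a batched FC.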