Closed stephaneburel-cea closed 3 years ago
Hi,
The C export does not support multi-branch quantized models such as MobileNet V2, because they require rescaling layers that the C export does not generate. Since the C export is deprecated, I suggest you switch to the new CPP export, which works fine for this model.
Cheers, Olivier
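As a sketch, assuming the CPP export accepts the same command-line flags as the C export run reported below (the `.ini` path, `-seed`, `-w`, and `-calib` values are taken verbatim from that run, only `-export C` is swapped for `-export CPP`), the suggested command would look like this; the snippet only prints the command rather than running it, since it requires an N2D2 installation:

```shell
# Hypothetical CPP export invocation, mirroring the reported C export command.
# Flags assumed unchanged between the two export targets.
cmd='n2d2.sh "$N2D2_MODELS/MobileNet_v2_ONNX_pytorch.ini" -seed 1 -w /dev/null -export CPP -calib -1'
echo "$cmd"
```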
Hello.
The ONNX import of a MobileNet v2 works well:
n2d2.sh "$N2D2_MODELS/MobileNet_v2_ONNX_pytorch.ini" -seed 1 -test -w /dev/null
(with the ONNX model from https://github.com/onnx/models/raw/master/vision/classification/mobilenet/model/mobilenetv2-7.onnx and the MobileNet_v2_ONNX_pytorch.ini file from the N2D2 repository)
This gives good accuracy, so I understand that the ONNX import works well.
But the quantized export gives bad accuracy:
n2d2.sh "$N2D2_MODELS/MobileNet_v2_ONNX_pytorch.ini" -seed 1 -w /dev/null -export C -calib -1
The accuracy is close to random guessing (0% to 0.1%).
Note: the same method was tested with MobileNet v1 (ini and ONNX files from the same sources). With MobileNet v1, the exported network gives a top-1 accuracy close to 50%, so the problem seems related to features specific to MobileNet v2.
Regards.