quic / qidk


The accuracy results of the Inception_V3 quantized model running on the QCS6490 DSP were unexpected. #38

Closed HuiJu1218 closed 1 week ago

HuiJu1218 commented 1 month ago

Hi, I used the sample from this link and followed the tutorial script to download and quantize the model. When running the quantized model on the QCS6490, the accuracy dropped much more than expected. The SDK version and the SOP I used are below. I'd like to know whether this is a known issue and how I can resolve it.

My workflow, from model quantization on the host to deployment on the QCS6490, is as follows:

  1. Download the Inception V3 .pb model.
  2. export TENSORFLOW_HOME
  3. Quantize the model and generate a DLC:
    python3 $SNPE_ROOT/examples/Models/InceptionV3/scripts/setup_inceptionv3_snpe.py -a ~/tmpdir -d -r dsp
  4. Run the quantized model on the QCS6490's DSP:
    snpe-net-run --container dlc/inception_v3_quantized.dlc --input_list data/cropped/raw_list.txt --use_dsp
  5. Use the Python script show_inceptionv3_classifications_snpe.py to check the output.

What I observed is that recognition results do exist, but of the 1000 ImageNet classes, fewer than 20 ever appear in the results. As shown in the following image, the top recognition results are all 0.0. (Screenshot 2024-10-03 161759)

Thanks.

quic-vraidu commented 1 month ago

Refer to Model-Accuracy-Mixed-Precision. This should help you with the process.