surendramaran / YOLOv8-TfLite-Object-Detector

A sample Android application of live object detection for any YOLOv8 detection model
https://www.surendramaran.com/
68 stars · 22 forks

How to adapt the demo code for running a YOLOv8 model with int8 quantization? #24

Open sid-022 opened 1 week ago

sid-022 commented 1 week ago

Hi, I have quantized a YOLOv8 model to int8 parameters. Could you please guide me on how to modify the demo code to make it compatible for running with the int8 quantized model?

[Screenshot: screenshot-20240911-200355]

sid-022 commented 1 week ago

I used NNAPI to accelerate inference for my int8-quantized detection model, but I noticed a significant accuracy drop compared to the fp32 version. Do you know how to resolve this issue?

[Screenshot: 20240911-203951]

surendramaran commented 1 week ago

It is quite common for small models to run well on the CPU; in my experience, using NNAPI doesn't always improve speed.

sid-022 commented 1 week ago

But I also ran the model on the phone's CPU and the accuracy is still the same; the accuracy in my PC tests has dropped a lot as well. Do you know how to solve this?
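(Not part of the original thread.) If the accuracy drop appears on CPU and on PC as well, the delegate is likely not the cause; the quantization itself may be, typically because the calibration (representative) data used during int8 conversion did not match the real value ranges. A small illustration of the effect, assuming simple per-tensor affine quantization (the function and scales below are illustrative, not from the repo):

```python
import numpy as np

def roundtrip_error(values, scale, zero_point=0):
    # Quantize to int8 and back, return the worst-case absolute error
    q = np.clip(np.round(values / scale) + zero_point, -128, 127)
    back = scale * (q - zero_point)
    return float(np.max(np.abs(values - back)))

values = np.linspace(-1.0, 1.0, 1001).astype(np.float32)

good_scale = 2.0 / 255.0   # scale matched to the actual [-1, 1] range
bad_scale = 20.0 / 255.0   # scale calibrated as if values spanned [-10, 10]

e_good = roundtrip_error(values, good_scale)
e_bad = roundtrip_error(values, bad_scale)
```

With a well-matched scale the round-trip error stays within one quantization step, while a scale calibrated on the wrong range inflates it roughly in proportion, which is why re-exporting with a representative dataset drawn from real preprocessed inputs is the usual first fix.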