ultralytics / hub

Ultralytics HUB tutorials and support
https://hub.ultralytics.com
GNU Affero General Public License v3.0

INT8 quantization is not applied when selected for TensorFlow Lite export #748

Closed: franciscocostela closed this issue 3 months ago

franciscocostela commented 4 months ago

HUB Component

Export

Bug

I trained an Object Detection model using both YOLOv5n and YOLOv8n. In the Deploy tab, I selected TensorFlow Lite - advanced and enabled 'INT8 Quantization'. Then I clicked Export and downloaded the model once the Download button became available (screenshot attached). However, when I inspect the file, it looks like the quantization is never applied. This happens for both YOLOv5n and YOLOv8n (screenshot attached). Is there anything I am not doing correctly, or is this really a bug?
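For anyone comparing outside HUB, here is a minimal sketch of the equivalent local export with the ultralytics Python package (the weights path is illustrative; int8=True requests INT8 post-training quantization for the TFLite export):

```python
# Minimal sketch: reproduce the INT8 TFLite export locally with the
# ultralytics package to compare against the HUB-exported file.
# Weights path is illustrative; substitute your HUB-trained weights.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# int8=True requests INT8 post-training quantization for the TFLite export
model.export(format="tflite", int8=True)
```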

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

github-actions[bot] commented 4 months ago

👋 Hello @franciscocostela, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more.

If this is a 🐛 Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix.

If this is a ❓ Question, please provide as much information as possible, including dataset, model, environment details etc. so that we might provide the most helpful response.

We try to respond to all issues as promptly as possible. Thank you for your patience!

sergiuwaxmann commented 4 months ago

@franciscocostela Hello! Can you check if the exported model size is 4x smaller than the fp32 one?
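A quick way to check is to compare the two files on disk; a sketch with illustrative file names:

```python
# Compare the FP32 and INT8 export sizes. A roughly 4x reduction is the
# expected signature of INT8 quantization. File names are illustrative.
import os

fp32 = os.path.getsize("yolov8n_float32.tflite")
int8 = os.path.getsize("yolov8n_int8.tflite")
print(f"FP32: {fp32 / 1e6:.1f} MB, INT8: {int8 / 1e6:.1f} MB, ratio: {fp32 / int8:.1f}x")
```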

franciscocostela commented 4 months ago

Hi Sergiu,

Yes, it is about 4x smaller. These are the file sizes:

- FP32 (original): 11.6 MB
- FP16: 5.8 MB
- INT8: 3.0 MB

I am trying to run the TFLite file through a conversion pipeline to deploy it to a camera, but it fails with an error message saying the file is not quantized. When I inspect it with Netron, I see that the quantization bias is FLOAT32. INT8 is used in some of the convolution layers, but not all of them (see screenshot). This appears to be what triggers the error in the conversion pipeline.
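For anyone inspecting this programmatically rather than in Netron, a sketch that lists the per-tensor dtypes with the TensorFlow Lite Interpreter (the model path is illustrative):

```python
# Sketch: list tensor dtypes in a TFLite file to see which tensors
# remain FLOAT32 after quantization. Model path is illustrative.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolov8n_int8.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_tensor_details():
    # 'quantization' is a (scale, zero_point) pair; (0.0, 0) means unquantized
    print(detail["name"], detail["dtype"], detail["quantization"])
```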

sergiuwaxmann commented 4 months ago

@franciscocostela Based on the file size, the quantization is applied.
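Worth noting: the observations above (roughly 4x size reduction, but FLOAT32 bias tensors and INT8 in only some convolution layers) suggest a mixed-precision export, whereas camera and edge pipelines often require full-integer quantization, in which every op runs in INT8 and biases are typically stored as INT32. As a point of reference, here is a sketch of a full-integer conversion with the TensorFlow Lite converter, assuming a TensorFlow SavedModel export of the model and a hypothetical representative_data_gen calibration generator:

```python
# Sketch: full-integer TFLite quantization, which some edge pipelines
# require. Assumes a SavedModel export of the YOLO model at an
# illustrative path; representative_data_gen is a hypothetical
# calibration generator yielding inputs shaped like the model input.
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # Random placeholders; real calibration should use representative images
    # resized to the model's input shape (here assumed 1x640x640x3).
    for _ in range(100):
        yield [np.random.rand(1, 640, 640, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("yolov8n_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Force every op to its INT8 implementation; conversion fails if one is missing
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("yolov8n_full_int8.tflite", "wb") as f:
    f.write(converter.convert())
```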

github-actions[bot] commented 3 months ago

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcome!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐