ultralytics / hub

Ultralytics HUB tutorials and support
https://hub.ultralytics.com
GNU Affero General Public License v3.0

TensorFlow Lite model deployment results #611

Closed jamesleech89 closed 4 months ago

jamesleech89 commented 6 months ago

Search before asking

Question

I have a new YOLOv5 object detection model trained on some new images with two classes. The model performs well in the Ultralytics preview tab and when I call the model's API. I exported a TensorFlow Lite version of the model and ran it on some of the same images that I passed to the preview tab and the API. Using the same confidence threshold and applying non-maximum suppression with the same IoU threshold, I get different results from those returned by the API and the preview tab. Even if I remove the non-maximum suppression step I apply on top of the TensorFlow Lite model, I can clearly see that the predictions differ (i.e. additional boxes are predicted for classes that are not predicted at all through the API/preview tab).

Additional

No response

github-actions[bot] commented 6 months ago

👋 Hello @jamesleech89, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more.

If this is a 🐛 Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix.

If this is a ❓ Question, please provide as much information as possible, including dataset, model, environment details etc. so that we might provide the most helpful response.

We try to respond to all issues as promptly as possible. Thank you for your patience!

UltralyticsAssistant commented 6 months ago

@jamesleech89 hello! It sounds like you're experiencing discrepancies between your model's predictions in TensorFlow Lite format and its performance via the Ultralytics API and preview tab. This can happen for several reasons:

  1. Quantization: If your TensorFlow Lite model is quantized, it might introduce slight precision losses, affecting the model's predictions. This is a common trade-off for the reduced model size and faster inference times on edge devices. A quick way to check whether your export was quantized is shown below.

  2. Preprocessing and Postprocessing: Ensure that the image preprocessing (resizing, normalization) and postprocessing (applying confidence thresholds and non-maximum suppression) steps are consistent across all platforms. Differences in these steps can lead to varied results; see the sketch right after this list.

  3. Model Version: Double-check that the TensorFlow Lite model is exported from the exact same model version and weights as the one used in the Ultralytics preview tab and API.

  4. Framework Differences: Sometimes, subtle differences in how frameworks handle operations can lead to discrepancies. TensorFlow Lite might handle certain operations differently than PyTorch, which Ultralytics models are originally implemented in.
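
Here is a minimal inference sketch for point 2 (it also helps isolate point 4), assuming a float32 YOLOv5 TFLite export at 640x640 that emits a single output tensor of shape `(1, N, 5 + num_classes)`. The file names `best-fp32.tflite` and `test.jpg` are placeholders, and the thresholds should be set to whatever you use in the preview tab/API:

```python
# Hypothetical parity harness: run the TFLite export with the same letterbox
# preprocessing and NMS settings that YOLOv5 uses. All paths are placeholders.
import cv2
import numpy as np
import tensorflow as tf

IMG_SIZE = 640     # must match the size the model was exported at
CONF_THRES = 0.25  # same confidence threshold as the preview tab/API
IOU_THRES = 0.45   # same IoU threshold as the preview tab/API

def letterbox(img, new_shape=IMG_SIZE, color=(114, 114, 114)):
    """Resize keeping aspect ratio, then pad to a square, as YOLOv5 does."""
    h, w = img.shape[:2]
    r = min(new_shape / h, new_shape / w)
    nh, nw = round(h * r), round(w * r)
    img = cv2.resize(img, (nw, nh), interpolation=cv2.INTER_LINEAR)
    top, left = (new_shape - nh) // 2, (new_shape - nw) // 2
    return cv2.copyMakeBorder(img, top, new_shape - nh - top, left,
                              new_shape - nw - left, cv2.BORDER_CONSTANT,
                              value=color)

interpreter = tf.lite.Interpreter(model_path="best-fp32.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

img = cv2.imread("test.jpg")
x = letterbox(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
x = x.astype(np.float32)[None] / 255.0  # NHWC, 0-1 range, as YOLOv5 expects

interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
pred = interpreter.get_tensor(out["index"])[0]  # (N, 5 + num_classes)

# Decode predictions: xywh, objectness, per-class scores. Recent YOLOv5
# TFLite exports normalize xywh to 0-1; if yours are already in pixels,
# drop the rescale below.
boxes_xywh = pred[:, :4] * IMG_SIZE
scores = pred[:, 4:5] * pred[:, 5:]  # obj_conf * cls_conf, like YOLOv5 NMS
cls = scores.argmax(1)
conf = scores.max(1)
keep = conf > CONF_THRES
boxes_xywh, cls, conf = boxes_xywh[keep], cls[keep], conf[keep]

# xywh -> xyxy, then per-class NMS via a class offset (YOLOv5's own trick:
# shifting each class's boxes apart makes one global NMS act per class)
xy, wh = boxes_xywh[:, :2], boxes_xywh[:, 2:4]
boxes = np.concatenate([xy - wh / 2, xy + wh / 2], 1)
offset = cls[:, None].astype(np.float32) * IMG_SIZE
idx = tf.image.non_max_suppression(boxes + offset, conf, 300, IOU_THRES).numpy()
print(boxes[idx], conf[idx], cls[idx])
```

Note the printed boxes live in the 640x640 letterboxed frame; undo the padding and scaling before comparing them with preview tab/API coordinates.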

For a detailed guide on exporting models and ensuring consistency across different platforms, please refer to the Ultralytics HUB Docs. If the issue persists, consider providing more details about the preprocessing and postprocessing steps, along with any specific settings used during the TensorFlow Lite model export. This will help in diagnosing the issue more accurately. 😊
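
To rule out point 1 quickly, you can inspect the tensor dtypes baked into the exported file; here is a small sketch (the filename `best.tflite` is a placeholder):

```python
# Hedged quantization check: float32 input/output tensors suggest a
# non-quantized export, while int8/uint8 dtypes with non-trivial scale and
# zero-point values indicate quantization. "best.tflite" is a placeholder.
import tensorflow as tf

interp = tf.lite.Interpreter(model_path="best.tflite")
interp.allocate_tensors()
for detail in interp.get_input_details() + interp.get_output_details():
    print(detail["name"], detail["dtype"], detail["quantization"])
```

If the dtypes turn out to be int8/uint8, try re-exporting without INT8 quantization and re-running the comparison to see whether the extra boxes disappear.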

github-actions[bot] commented 5 months ago

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐