Closed vyang2968 closed 1 year ago
While this will not help with software designed around the expectation of `TFLite_Detection_PostProcess` being the output, this guide on StackOverflow (https://stackoverflow.com/questions/65824714/process-output-data-from-yolov5-tflite/65953473#65953473) shows how to parse the output tensor data. This should work with the pycoral library, except for the functions in `pycoral.adapters.detect`, as those expect the model to end with `TFLite_Detection_PostProcess`.
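The approach from the linked answer can be sketched with plain NumPy. The layout assumed here — each of the 6300 rows being `[x, y, w, h, objectness, 80 class scores]` with normalized center-format boxes — is the standard YOLOv5 head layout described in that answer; the synthetic tensor merely stands in for an `interpreter.get_tensor(...)` call, and non-max suppression would still be needed afterwards:

```python
import numpy as np

def decode_yolov5_tflite(pred, conf_thres=0.25):
    """Decode a YOLOv5 TFLite output tensor of shape (1, 6300, 85).

    Each row is [x, y, w, h, objectness, 80 class scores], with box
    coordinates normalized to 0..1 in center format.
    """
    pred = pred[0]                      # drop batch dim -> (6300, 85)
    keep = pred[:, 4] > conf_thres      # objectness pre-filter
    pred = pred[keep]

    cls_scores = pred[:, 5:] * pred[:, 4:5]   # conditional class confidence
    cls_ids = cls_scores.argmax(axis=1)
    confs = cls_scores.max(axis=1)

    # xywh (center) -> xyxy corners, still normalized to 0..1
    x, y, w, h = pred[:, 0], pred[:, 1], pred[:, 2], pred[:, 3]
    boxes = np.stack([x - w / 2, y - h / 2, x + w / 2, y + h / 2], axis=1)
    return boxes, confs, cls_ids

# Synthetic tensor standing in for the interpreter's output:
# one confident detection of class 79, centered at (0.5, 0.5).
dummy = np.zeros((1, 6300, 85), dtype=np.float32)
dummy[0, 0] = [0.5, 0.5, 0.2, 0.2, 0.9] + [0.0] * 79 + [0.8]

boxes, confs, cls_ids = decode_yolov5_tflite(dummy)
```

Note that an int8-quantized EdgeTPU model additionally requires dequantizing the raw output with the tensor's scale and zero-point before this step.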
@vyang2968 hi, thank you for your feature suggestion on how to improve YOLOv5 🚀!
The fastest and easiest way to incorporate your ideas into the official codebase is to submit a Pull Request (PR) implementing your idea and, if applicable, providing before-and-after profiling/inference/training results to help us understand the improvement your feature provides. This allows us to directly see the changes in the code and to understand how they affect workflows and performance.
Please see our ✅ Contributing Guide to get started.
Unfortunately, I am unable to submit a PR, as I do not currently have the capability to do so.
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
@vyang2968 Understood, submitting a PR is voluntary. In the meantime, did you find the provided resources helpful for parsing the output tensor data as a workaround for your use case? Let me know if you need any further assistance!
Search before asking
Description
When exporting `yolov5s.pt` to EdgeTPU with `python export.py --weights yolov5s.pt --include edgetpu`, it outputs `yolov5s-int8_edgetpu.tflite`. Inspecting it with Netron shows the model has 1 output with dimension `[1,6300,85]`. Coral's pretrained models instead have 4 outputs coming from a `TFLite_Detection_PostProcess` node. Just like the exported yolov5s model, the outputs are a `StatefulPartitionedCall`, but their dimensions are different: in the case of the TF2 SSD MobileNet V2 model, there are 4 outputs with dimensions `[1,20]`, `[1]`, `[1,20,4]`, and `[1,20]`. That format allows the model to be used directly with the `edgetpu_detect_server` command on the Dev Board and with the pycoral library.

Use case
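For comparison, consuming those four post-processed tensors needs little more than thresholding. The sketch below assumes the conventional `TFLite_Detection_PostProcess` layout (boxes as normalized `[ymin, xmin, ymax, xmax]`, a valid-detection count, and per-detection class ids and scores, with 20 as the max-detections limit); the synthetic arrays stand in for the interpreter's actual output tensors:

```python
import numpy as np

MAX_DET = 20  # max detections baked into TFLite_Detection_PostProcess

# Synthetic stand-ins for the four output tensors of a Coral SSD model.
boxes = np.zeros((1, MAX_DET, 4), dtype=np.float32)   # [1, 20, 4]
classes = np.zeros((1, MAX_DET), dtype=np.float32)    # [1, 20]
scores = np.zeros((1, MAX_DET), dtype=np.float32)     # [1, 20]
count = np.array([2.0], dtype=np.float32)             # [1]

boxes[0, 0] = [0.1, 0.1, 0.5, 0.5]   # [ymin, xmin, ymax, xmax], normalized
classes[0, 0] = 16.0
scores[0, 0] = 0.9
boxes[0, 1] = [0.2, 0.6, 0.4, 0.9]
classes[0, 1] = 0.0
scores[0, 1] = 0.4                   # below threshold, will be dropped

def parse_postprocess(boxes, classes, scores, count, score_thres=0.5):
    """Return (box, class_id, score) triples for valid detections."""
    n = int(count[0])   # only the first n rows are meaningful
    out = []
    for i in range(n):
        if scores[0, i] >= score_thres:
            out.append((boxes[0, i], int(classes[0, i]), float(scores[0, i])))
    return out

detections = parse_postprocess(boxes, classes, scores, count)
```

This is essentially what `pycoral.adapters.detect` does on the model's behalf, which is why those functions require the post-processing node to be present.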
Users of Google's Dev Board and Dev Board Mini will enjoy a "plug and play" model that can be passed directly to the `edgetpu_detect` and `edgetpu_detect_server` commands as the `--model` argument. Those tools will then be able to extract the inputs and outputs and detect objects in the video stream. In addition, users of Google's pycoral library will have better access to its easy-to-use functions.

Additional
No response
Are you willing to submit a PR?