Open MECHAAMER opened 1 day ago
Try using MobileNet V2 SSD.
On Thu, Oct 10, 2024 at 16:24, MECHAAMER wrote:
Hello everyone,
I'm working on training a YOLO model for object detection and plan to use a Google Coral Dev Board for inference. As the Coral documentation recommends, the model should be in TFLite format with 8-bit quantization for optimal performance.
Thanks to Ultralytics, exporting the model to the required format is straightforward:
```python
from ultralytics import YOLO

model = YOLO("pre_trained_model.pt")

# Export the model to TFLite Edge TPU format
model.export(format="edgetpu")
```
In the output, I see:
```
Number of operations that will run on Edge TPU: 425
Number of operations that will run on CPU: 24
```
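As a side note, a bit of rough arithmetic (illustrative only, not from the compiler output) shows what those counts mean. Note that the Edge TPU compiler partitions the graph at the first unsupported op, so even a small CPU fallback can cost more than its share suggests, since everything after the partition point runs on the CPU:

```python
# Rough arithmetic on the compiler's op counts (425 TPU, 24 CPU).
# Percentages are illustrative; actual latency depends on which ops
# fall back and where the graph gets partitioned.
tpu_ops = 425
cpu_ops = 24
total = tpu_ops + cpu_ops

tpu_share = 100 * tpu_ops / total
cpu_share = 100 * cpu_ops / total

print(f"{tpu_share:.1f}% of ops on the Edge TPU, {cpu_share:.1f}% on the CPU")
```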
My question is: Can I do anything to make all operations run on the TPU for faster processing?
Additionally, are there any other recommended models that might offer better accuracy and lower latency on a Google Coral board?
Thanks all.
— View this issue on GitHub: https://github.com/google-coral/edgetpu/issues/868
Is it better than YOLOv8n?
That depends on your specific use case. Did you already try running YOLOv8n on the Dev Board? What performance stats are you getting (inference times, etc.)?
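For comparable numbers across models, something like the following can report mean and worst-case latency. This is a minimal sketch: `run_inference` is a hypothetical stand-in for whatever actually invokes the interpreter on the Dev Board, and the warm-up/run counts are arbitrary:

```python
import time

def measure_latency(fn, warmup=5, runs=50):
    """Time fn() over several runs, skipping warm-up iterations."""
    for _ in range(warmup):
        fn()  # first calls are often slower (model load, caches)
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return sum(times) / len(times), max(times)

# Hypothetical stand-in for a real Edge TPU inference call:
def run_inference():
    time.sleep(0.001)

mean_s, worst_s = measure_latency(run_inference, warmup=2, runs=10)
print(f"mean {mean_s * 1000:.1f} ms, worst {worst_s * 1000:.1f} ms")
```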
Thanks for the reply. I'm out of the office right now and don't have access to the test data, but inference was slow: about 0.14 s per frame. I think it would be faster if all the ops could run on the TPU.
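For reference, 0.14 s per frame works out to roughly 7 FPS. A quick sanity check of the per-frame budget for a target frame rate (the 30 FPS target below is just an example, not from this thread):

```python
# Convert the reported per-frame latency to throughput.
latency_s = 0.14
fps = 1 / latency_s
print(f"{fps:.1f} FPS at {latency_s * 1000:.0f} ms/frame")

# Per-frame latency budget needed to hit an example target rate.
target_fps = 30
budget_ms = 1000 / target_fps
print(f"{budget_ms:.1f} ms/frame budget for {target_fps} FPS")
```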