google-coral / edgetpu

Coral issue tracker (and legacy Edge TPU API source)
https://coral.ai
Apache License 2.0

Google coral dev board - object detection using Yolo #868

Open MECHAAMER opened 1 month ago

MECHAAMER commented 1 month ago

Hello everyone,

I'm working on training a YOLO model for object detection and plan to use a Google Coral Dev Board for inference. As the Coral documentation recommends, the model should be in TFLite format with 8-bit quantization for optimal performance.

Thanks to Ultralytics, exporting the model to the required format is straightforward:

from ultralytics import YOLO
model = YOLO("pre_trained_model.pt")

# Export the model to TFLite Edge TPU format
model.export(format="edgetpu")

In the output, I see:

Number of operations that will run on Edge TPU: 425
Number of operations that will run on CPU: 24

My question is: Can I do anything to make all operations run on the TPU for faster processing?

Additionally, are there any other recommended models that might offer better accuracy and lower latency on a Google Coral board?

Thanks all.
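A minimal sketch of loading an Edge TPU-compiled model and measuring raw per-inference time with tflite_runtime, assuming tflite_runtime and libedgetpu are available on the Dev Board; the model filename below is a placeholder for whatever the export actually produced:

import time
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Placeholder filename: use whatever file the Ultralytics export actually wrote.
MODEL = "yolov8n_full_integer_quant_edgetpu.tflite"

# Load the compiled model with the Edge TPU delegate.
interpreter = Interpreter(
    model_path=MODEL,
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

# Feed one dummy frame matching the quantized input shape/dtype.
inp = interpreter.get_input_details()[0]
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)

interpreter.invoke()  # warm-up; the first invocation is slower

runs = 100
start = time.perf_counter()
for _ in range(runs):
    interpreter.invoke()
elapsed = time.perf_counter() - start
print(f"{elapsed / runs * 1000:.1f} ms per inference, {runs / elapsed:.1f} FPS")

This measures only the interpreter call; image preprocessing and YOLO output decoding add to the per-frame time.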

ajbolt69 commented 1 month ago

Try using MobileNet V2 SSD.


MECHAAMER commented 1 month ago

Is it better than YOLOv8n?

ajbolt69 commented 1 month ago

That depends on your specific use case. Did you already try running YOLOv8n on the Dev Board? What performance stats are you getting (inference times, etc.)?


MECHAAMER commented 1 month ago

Thanks for the reply. I'm out of the office right now and don't have access to the test data, but the inference was slow: about 0.14 s per frame. I think inference will be faster if I can run all the ops on the TPU.

MECHAAMER commented 1 month ago

Yes, I tried running YOLOv8n. The accuracy is mostly good, but the latency is high: only 5.58 frames per second. Do you think MobileNet V2 SSD will be faster and more accurate?

ajbolt69 commented 1 month ago

I don't have the stats to comment on accuracy, but inference time will be much faster; it easily runs at 30 fps.


MECHAAMER commented 1 month ago

I read this sentence in the official Coral documentation:

"If part of your model executes on the CPU, you should expect a significantly degraded inference speed compared to a model that executes entirely on the Edge TPU. We cannot predict how much slower your model will perform in this situation, so you should experiment with different architectures and strive to create a model that is 100% compatible with the Edge TPU. That is, your compiled model should contain only the Edge TPU custom operation."

You suggested using MobileNet V2 SSD; do you have any other recommendations for models that can run 100% on the TPU (for object detection)?

ajbolt69 commented 1 month ago

Do you have any compatibility issues running MobileNet V2 SSD? I suggest you try it out for yourself and decide whether you like it. In my opinion it runs smoothly; it's optimised for the TPU, which is why it's used in https://github.com/google-coral/examples-camera
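For anyone trying that suggestion, a minimal single-image sketch using pycoral with a pretrained SSD MobileNet V2 COCO model compiled for the Edge TPU (e.g., ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite and coco_labels.txt from the google-coral/test_data repository), assuming pycoral and Pillow are installed; "image.jpg" is a placeholder test image:

from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

MODEL = "ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite"
LABELS = "coco_labels.txt"

interpreter = make_interpreter(MODEL)
interpreter.allocate_tensors()
labels = read_label_file(LABELS)

image = Image.open("image.jpg")
# Resize the image to the model's input size and keep the scale so the
# returned boxes can be mapped back to the original image coordinates.
_, scale = common.set_resized_input(
    interpreter, image.size, lambda size: image.resize(size, Image.LANCZOS))
interpreter.invoke()

for obj in detect.get_objects(interpreter, score_threshold=0.5, image_scale=scale):
    print(labels.get(obj.id, obj.id), f"{obj.score:.2f}", obj.bbox)

The scale returned by set_resized_input lets get_objects report boxes in the original image coordinates.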


MECHAAMER commented 1 month ago

Thank you for your reply and your time. I will try it and come back with results.

MECHAAMER commented 1 month ago

Do you recommend any training parameters for MobileNet V2 SSD? Is there a link that can help me?

The link you shared explains how to use a pre-trained model; I want to train the model on my own dataset.

ajbolt69 commented 1 month ago

I'm not very sure about that; I have only used the pretrained model as well. I'll be working on custom training for the next two weeks.
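On custom training in general: whichever detection model is trained, the exported model needs full-integer post-training quantization before the Edge TPU compiler can map its ops. A generic sketch with the TensorFlow Lite converter, assuming the trained model was exported as a SavedModel; the paths, input size, and calibration-image directory are placeholders, and SSD models exported from the TF Object Detection API may additionally need allow_custom_ops for the detection post-processing op:

import glob
import tensorflow as tf

SAVED_MODEL_DIR = "exported_saved_model"     # placeholder: your trained model export
IMAGE_GLOB = "calibration_images/*.jpg"      # placeholder: ~100 representative images
INPUT_SIZE = (300, 300)                      # placeholder: must match the model input

def representative_dataset():
    # Calibration data for quantization; preprocessing must match training.
    for path in glob.glob(IMAGE_GLOB)[:100]:
        img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
        img = tf.image.resize(img, INPUT_SIZE) / 255.0
        yield [tf.cast(img[tf.newaxis, ...], tf.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force 8-bit ops throughout so the Edge TPU compiler can map as many ops as possible.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
# converter.allow_custom_ops = True  # may be needed for TF OD API SSD post-processing

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())

The quantized file is then passed to the edgetpu_compiler tool, which reports how many ops were mapped to the Edge TPU versus the CPU, as in the output quoted at the top of this issue.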


MECHAAMER commented 1 month ago

OK, thank you. Would you recommend EfficientDet or MobileNet for object detection?

ajbolt69 commented 3 weeks ago

Hey, sorry for the late reply. I would recommend MobileNet since I have used it; it uses the Edge TPU well and easily runs at 30 fps, with about 20 ms inference time per frame. I'm currently working on deploying a custom YOLO model to the Dev Board and a Raspberry Pi 5. Would you like to connect separately? My email is arinjay1402@gmail.com.