Closed: mmr689 closed this issue 6 months ago.
Hello @mmr689, thank you for your interest in Ultralytics YOLOv8! We recommend a visit to the Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.
Pip install the `ultralytics` package, including all requirements, in a Python>=3.8 environment with PyTorch>=1.8:

```shell
pip install ultralytics
```
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@mmr689 hello!
It looks like you're on the right track with using the `tflite_runtime` library for deploying your YOLOv8 model on the Coral Dev Board. The issue with the wrong boxes might be due to how the output data is being interpreted.
The output of YOLO models typically includes bounding box coordinates normalized between 0 and 1, class probabilities, and objectness scores. It's crucial to correctly map these outputs to your image dimensions and apply the correct threshold to filter out low-confidence detections.
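As a tiny illustration of that mapping (a hedged sketch with made-up numbers; your image size, detection values, and threshold will differ), scaling one normalized detection to pixel corners and applying a confidence cutoff could look like:

```python
# Hypothetical single detection: normalized [x_center, y_center, width, height]
# plus a confidence score. These values are invented for illustration.
det = [0.5, 0.5, 0.25, 0.5]
score = 0.7
img_w, img_h = 640, 480  # example image dimensions

CONF_THRESHOLD = 0.25  # discard low-confidence detections
if score >= CONF_THRESHOLD:
    x_c, y_c, w, h = det
    # Scale normalized center/size to pixel corner coordinates
    x1 = int((x_c - w / 2) * img_w)
    y1 = int((y_c - h / 2) * img_h)
    x2 = int((x_c + w / 2) * img_w)
    y2 = int((y_c + h / 2) * img_h)
    print(x1, y1, x2, y2)  # 240 120 400 360
```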
Here's a quick suggestion to adjust your code:
1. Ensure the output tensor indices in `output_details` match the expected outputs of your model. YOLO models usually have multiple outputs for boxes, objectness scores, and class probabilities. You might be accessing only one output tensor.
2. The order of the box coordinates might be different than expected. YOLO models usually output boxes in the format `[x_center, y_center, width, height]`, which you'll need to convert to `[y1, x1, y2, x2]` format.
3. Apply a more realistic threshold for scores to filter detections, something like `0.25` or higher, depending on your use case.
Here's a modified snippet for the box conversion and thresholding part:
```python
threshold = 0.25  # Adjust based on your model's performance

for i in range(len(filtered_boxes)):
    box = filtered_boxes[i]
    class_id = int(filtered_classes[i])
    score = filtered_scores[i]

    # Convert from [x_center, y_center, width, height] to [y1, x1, y2, x2]
    x_center, y_center, box_width, box_height = box
    x1 = int((x_center - box_width / 2) * frame.shape[1])
    y1 = int((y_center - box_height / 2) * frame.shape[0])
    x2 = int((x_center + box_width / 2) * frame.shape[1])
    y2 = int((y_center + box_height / 2) * frame.shape[0])

    # Drawing and labeling logic remains the same
```
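For context on where `filtered_boxes`, `filtered_scores`, and `filtered_classes` above might come from: if the exported model emits a single tensor of shape `(1, 84, 8400)` (4 box values plus 80 class scores per candidate — an assumption to verify against your own `output_details`), they could be derived roughly like this sketch, where random data stands in for the real interpreter output:

```python
import numpy as np

threshold = 0.25
# Placeholder for interpreter.get_tensor(output_details[0]["index"])
output = np.random.rand(1, 84, 8400).astype(np.float32)

preds = output[0].T            # (8400, 84): one row per candidate box
boxes = preds[:, :4]           # normalized [x_center, y_center, w, h] (assumed layout)
class_scores = preds[:, 4:]    # per-class confidences (assumed: no separate objectness)

scores = class_scores.max(axis=1)     # best class confidence per candidate
classes = class_scores.argmax(axis=1) # best class index per candidate

keep = scores > threshold
filtered_boxes = boxes[keep]
filtered_scores = scores[keep]
filtered_classes = classes[keep]
# A non-max suppression step (e.g. cv2.dnn.NMSBoxes) is still needed before drawing.
```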
Make sure to adjust the indices for accessing the outputs from `output_details` based on your model's specific output format. This might require some trial and error, or reviewing the model's documentation, to get right.
Hope this helps! Let us know if you have further questions.
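One detail worth adding for the int8 export specifically: quantized outputs need to be dequantized with the scale and zero-point reported in `output_details` before the box math above makes sense. A hedged sketch (the model filename and output layout are assumptions; the interpreter calls are shown as comments since they need a real `.tflite` file to run):

```python
import numpy as np

def dequantize(raw, scale, zero_point):
    """Map int8 tensor values back to floats using the quantization params from output_details."""
    if scale == 0:  # a float model reports (0.0, 0) and needs no dequantization
        return raw.astype(np.float32)
    return (raw.astype(np.float32) - zero_point) * scale

# Surrounding tflite_runtime calls (require a real model file, so shown as comments):
#   from tflite_runtime.interpreter import Interpreter
#   interpreter = Interpreter(model_path="yolov8n_int8.tflite")  # hypothetical filename
#   interpreter.allocate_tensors()
#   input_details = interpreter.get_input_details()
#   output_details = interpreter.get_output_details()
#   interpreter.set_tensor(input_details[0]["index"], input_tensor)
#   interpreter.invoke()
#   raw = interpreter.get_tensor(output_details[0]["index"])
#   scale, zero_point = output_details[0]["quantization"]
#   preds = dequantize(raw, scale, zero_point)

# Quick self-check with synthetic values: scale 0.01, zero point -5
demo = dequantize(np.array([-5, 95], dtype=np.int8), 0.01, -5)
print(demo)  # values land at approximately 0.0 and 1.0
```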
Search before asking
Question
Hello everyone,
What I want to do is run my YOLOv8 model on a Coral Dev Board. Because of this, I need to export the model into an int8 TFLite model.
Converting it from `.pt` to `.tflite` is easy with the Ultralytics docs. Using the `.tflite` model with `YOLO` from the Ultralytics library is also easy. But how can I use the `.tflite` model with the `tflite_runtime` library? I mean, with the following code I can run my model and draw some boxes on my image, but they are obviously wrong. The code to create this image is:
I can't find much information on how to do this. Can anyone help me?
Thanks a lot.
Additional
No response