Closed jameslahm closed 1 month ago
@jameslahm hey, can you share more details about:
The onnx models of YOLOv10 in Roboflow inference may be incorrectly exported. This PR hopes to replace the onnx models of YOLOv10 with the following: yolov10n.onnx, yolov10s.onnx, yolov10m.onnx, yolov10b.onnx, yolov10l.onnx, and yolov10x.onnx.
We used the CLI provided to do the onnx export. Was there some change to the CLI?
@probicheaux did a little experiment below
Although a diff of the weights we converted against the weights provided in this issue for yolov10n shows they are different, running inference currently produces the exact same predictions.
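A byte-level diff like the one described can be sketched as follows (the file paths are placeholders for the converted and provided weights, not paths from this thread):

```python
import hashlib


def file_sha256(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


# Placeholder paths: equal digests mean the two .onnx files are byte-identical.
# file_sha256("converted/yolov10n.onnx") == file_sha256("provided/yolov10n.onnx")
```

Note that two exports can differ byte-wise (metadata, node names, opset details) while still producing identical predictions, which is consistent with the observation above.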
@NickHerrig We found that the results of YOLOv10-M on vehicles.png are different. Could you please verify this in your local environment? Thanks a lot!
yeah absolutely! @jameslahm
Same experience when dropping in the provided weights versus the hosted weights.
Below is the code snippet I'm using, manually replacing the weights in /tmp/cache/coco/{version}/weights.onnx:
from inference import get_model
import supervision as sv
import cv2

# Annotators for drawing bounding boxes and labels
BOUNDING_BOX_ANNOTATORS = sv.BoundingBoxAnnotator()
LABEL_ANNOTATORS = sv.LabelAnnotator(text_color=sv.Color.black())

img = cv2.imread("people-walking.png")
model = get_model("coco/21")
results = model.infer("people-walking.png", confidence=0.2, iou_threshold=0.7)[0]
detections = sv.Detections.from_inference(results)
labels = [
f"{class_name} ({confidence:.2f})"
for class_name, confidence
in zip(detections['class_name'], detections.confidence)
]
annotated_image = BOUNDING_BOX_ANNOTATORS.annotate(
scene=img, detections=detections)
annotated_image = LABEL_ANNOTATORS.annotate(
scene=annotated_image, detections=detections, labels=labels)
cv2.imshow("Annotated Image", annotated_image)
cv2.imwrite("new.png", annotated_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
@NickHerrig Thanks! It seems that the model id of YOLOv10-M is coco/22. We obtain different results by using the provided weights and the hosted weights after incorporating this PR.
Hosted yolov10m weights
Provided yolov10m weights
The code is similar to yours:
from inference import get_model
import supervision as sv
import cv2

# Annotators for drawing bounding boxes and labels
BOUNDING_BOX_ANNOTATORS = sv.BoundingBoxAnnotator()
LABEL_ANNOTATORS = sv.LabelAnnotator(text_color=sv.Color.black())

img = cv2.imread("vehicles.png")
model = get_model("coco/22")
results = model.infer("vehicles.png", confidence=0.2, iou_threshold=0.7)[0]
detections = sv.Detections.from_inference(results)
labels = [
f"{class_name} ({confidence:.2f})"
for class_name, confidence
in zip(detections['class_name'], detections.confidence)
]
annotated_image = BOUNDING_BOX_ANNOTATORS.annotate(
scene=img, detections=detections)
annotated_image = LABEL_ANNOTATORS.annotate(
scene=annotated_image, detections=detections, labels=labels)
cv2.imwrite("new.png", annotated_image)
@NickHerrig @probicheaux After investigation, we found that the reason is that the hosted weights of YOLOv10-M and YOLOv10-B are swapped. The results of all YOLOv10 variants are below. It can be observed that the results of the hosted YOLOv10-M and the provided YOLOv10-B are the same, and the results of the hosted YOLOv10-B and the provided YOLOv10-M are the same. Could you please fix this? Thank you!
Hosted yolov10n weights (model_id=coco/19)
Hosted yolov10s weights (model_id=coco/20)
Hosted yolov10m weights (model_id=coco/22)
Hosted yolov10b weights (model_id=coco/21)
Hosted yolov10l weights (model_id=coco/23)
Hosted yolov10x weights (model_id=coco/24)
Provided yolov10n weights
Provided yolov10s weights
Provided yolov10m weights
Provided yolov10b weights
Provided yolov10l weights
Provided yolov10x weights
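A programmatic version of this comparison could check boxes and scores across two runs. A minimal sketch, assuming the detections have already been pulled into NumPy arrays (for example the xyxy and confidence fields of an sv.Detections object):

```python
import numpy as np


def same_detections(a_xyxy, a_conf, b_xyxy, b_conf, tol=1e-3):
    """Return True if two detection sets match in count, boxes, and scores."""
    a_xyxy, b_xyxy = np.asarray(a_xyxy), np.asarray(b_xyxy)
    if a_xyxy.shape != b_xyxy.shape:
        return False
    return np.allclose(a_xyxy, b_xyxy, atol=tol) and np.allclose(a_conf, b_conf, atol=tol)


# Hypothesis check (variables are hypothetical; they would come from the runs above):
# same_detections(hosted_m.xyxy, hosted_m.confidence,
#                 provided_b.xyxy, provided_b.confidence)  # swapped weights -> True
```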
@jameslahm Thank you so much for your contributions and thoroughness.
I ran the experiment above and am still not able to replicate the findings, but I believe I see the root of the confusion. Please confirm that you are using "coco/21" for "yolov10m" and "coco/22" for "yolov10b".
I think that you may be mixing up our coco versions with the "aliases". Below is a mapping of the aliases to the coco versions:
models = {
"yolov10n": "coco/19",
"yolov10s": "coco/20",
"yolov10m": "coco/21",
"yolov10b": "coco/22",
"yolov10l": "coco/23",
"yolov10x": "coco/24",
}
With these aliases in mind, I am able to recreate the above confidences on this PR, both with the hosted and the provided weights. So I believe our weights are the same at the moment.
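As an aside, a mapping like the one above can be wrapped in a small lookup helper so that a mixed-up alias fails loudly. This is a hypothetical convenience, not part of inference:

```python
# Alias-to-model-id mapping from the thread above
MODEL_ALIASES = {
    "yolov10n": "coco/19",
    "yolov10s": "coco/20",
    "yolov10m": "coco/21",
    "yolov10b": "coco/22",
    "yolov10l": "coco/23",
    "yolov10x": "coco/24",
}


def resolve_model_id(alias: str) -> str:
    """Map a YOLOv10 alias to its coco/<version> model id, or raise on a typo."""
    try:
        return MODEL_ALIASES[alias]
    except KeyError:
        raise ValueError(f"Unknown alias {alias!r}; expected one of {sorted(MODEL_ALIASES)}")
```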
@NickHerrig Thanks for your great efforts and help! But when we look at the model_type.json in /tmp/cache/coco/22, it shows yolov10m in the model_type. Are we missing something? Thanks!
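The inspection described here can be scripted against the cache layout. A sketch, assuming model_type.json carries a model_type key as this thread indicates:

```python
import json
from pathlib import Path


def read_model_type(cache_dir: str) -> str:
    """Read the model_type field from model_type.json in a model cache directory."""
    meta = json.loads(Path(cache_dir, "model_type.json").read_text())
    return meta["model_type"]


# e.g. read_model_type("/tmp/cache/coco/22")
# expected "yolov10b" per the alias mapping, but observed "yolov10m" here
```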
@jameslahm great finding! Okay, so the weights files are correct, but the model_type.json has a typo! I can fix that!
@NickHerrig Thank you!
@jameslahm first of all thank you much for your contribution!
@NickHerrig re:
@jameslahm great finding! Okay, so the weights files are correct, but the model_type.json has a typo! I can fix that!
I suspect that file is autogenerated by the Roboflow API and that the problem is an incorrect record in Firestore; can you verify?
Do I need to do anything else to help with the merge? Please feel free to ping me if there's anything I can assist with 😊.
@jameslahm sorry, I was out sick. Unfortunately, looks like we're having some trouble with the merge process: some tests weren't running properly and it looks like style failed. I figured it'd just be easier to get this in as a branch here: https://github.com/roboflow/inference/pull/453
Merged! thanks for your contribution @jameslahm
@NickHerrig We notice that the model_type is yolov10m in /tmp/cache/coco/22/model_type.json, and that the resize format is "Fit (black edges) in" in /tmp/cache/coco/22/environment.json. Could you please fix these? Thanks a lot!
@jameslahm thanks for taking a look into that. Your problem should be resolved. You should clear /tmp/cache/coco to verify.
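Clearing the cache can be done with a short snippet (a sketch; adjust the path if your cache lives elsewhere):

```python
import shutil


def clear_model_cache(path: str = "/tmp/cache/coco") -> None:
    """Remove cached weights/metadata so they are re-downloaded on next use."""
    shutil.rmtree(path, ignore_errors=True)
```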
Description
The color used in LetterBox is (114,114,114) in ultralytics. This PR changes the color to (114,114,114) to make LetterBox consistent with it.
Type of change
Please delete options that are not relevant.
How has this change been tested? Please provide a testcase or example of how you tested the change.
Here is a minimal testcase:
The annotated_vehicles.png should be like below.
Any specific deployment considerations
For example, documentation changes, usability, usage/costs, secrets, etc.
Docs
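As a closing illustration of the LetterBox change in this PR: a minimal NumPy-only sketch of letterbox padding using the ultralytics color (114,114,114). This is a hypothetical standalone helper, not the actual inference implementation, and it omits the rescaling step a real letterbox performs:

```python
import numpy as np


def letterbox(img: np.ndarray, new_shape=(640, 640), color=(114, 114, 114)) -> np.ndarray:
    """Center `img` on a canvas filled with `color` (padding behavior only).

    A real letterbox also rescales the image to fit; this sketch only shows
    the padded border affected by the color change.
    """
    h, w = img.shape[:2]
    canvas = np.full((new_shape[0], new_shape[1], 3), color, dtype=np.uint8)
    top = (new_shape[0] - h) // 2
    left = (new_shape[1] - w) // 2
    canvas[top:top + h, left:left + w] = img
    return canvas
```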