Closed filipwojtasik111 closed 9 months ago
👋 Hello @filipwojtasik111, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.
Pip install the `ultralytics` package including all requirements in a Python>=3.8 environment with PyTorch>=1.8:
pip install ultralytics
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@filipwojtasik111 the CoreML export with `nms=True` is indeed class-aware. Currently, there isn't an option to change the NMS policy during export within the YOLOv8 repository. However, you could manually adjust the NMS settings in the CoreML model after export, or process the predictions post-inference to apply class-agnostic NMS. If you need further assistance, please provide additional details or consider opening a feature request issue for class-agnostic NMS support in CoreML exports. 👍
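As a rough illustration of the post-inference workaround, here is a minimal NumPy sketch of class-agnostic NMS (this is not part of the Ultralytics API; it assumes you have already decoded boxes in xyxy format with per-box confidences, and it deliberately ignores class labels so near-duplicate boxes with different labels suppress each other):

```python
import numpy as np

def class_agnostic_nms(boxes, scores, iou_threshold=0.5):
    """Suppress overlapping boxes regardless of their class.

    boxes:  (N, 4) array in xyxy format
    scores: (N,) confidence scores
    Returns a list of kept indices, highest score first.
    """
    order = scores.argsort()[::-1]  # indices sorted by descending confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top-scoring box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Drop every remaining box that overlaps the kept box too much,
        # no matter which class it was assigned
        order = order[1:][iou <= iou_threshold]
    return keep
```

Running this over the merged detections of all classes (rather than per class) reproduces the class-agnostic behavior asked about in this issue.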
FYI, I was playing with a CoreML export (`format=coreml nms=True`) from a yolov9c.pt. Opening the .mlpackage in Xcode and using the prediction preview to test (which I use as a quick sanity check), the results were nonsensical. I need to investigate more; it might even be expected with the arch.
Weird, it's unrelated to v9. Something is happening with the pipeline: when forcing it back to coremltools 6.2 with a NeuralNetwork model via `format=mlmodel`, it works as expected. Previous deploys were exported (ML Program → Non Maximum Suppression) without issue, using coremltools 7 / ML Program.
@glenn-jocher I don't see anything obvious or suspicious going through the changelogs in exporter.py, so it must have regressed somewhere in my ML ops, which are pretty static. Train/inference and train/export are done on SM ECR containers; if I had to guess, it's coremltools-related, since it's a consistent PITA.
The earliest working ML Program, per the ultralytics metadata, looks like it was on 8.1.11, which might just be a coincidence of other container changes. All PyTorch val/predictions are normal; just the latest ML Program's `VNCoreMLRequest`s are coming back with nonsensical observations, confirmed via Xcode's quick preview as well.
@rromanchuk thanks for the detailed info! It does sound like the issue might be stemming from coremltools. If it works fine with coremltools 6.2 and you're seeing issues starting from coremltools 7, that's a strong indicator. 🕵️ Since everything else in your pipeline appears to be consistent, I'd recommend focusing on the changes between those coremltools versions.
As a quick test, you might want to manually pin coremltools to 6.2 in one of your SM ECR containers and see if that resolves the issue for a new export. If it does, it could give more weight to the hypothesis that a change in coremltools is causing the unexpected behavior.
pip install coremltools==6.2
It's great that you're keeping an eye on the ultralytics metadata as well; that could indeed provide useful clues. And yes, coremltools can sometimes be a bit unpredictable. 😅
Keep me posted on what you find, and if there's anything specific in the coremltools updates that we might need to adjust for on our end, we'll look into it. Good luck! 🍀
Hi @rromanchuk and @glenn-jocher, I've been experiencing exactly the same thing when exporting a trained YOLOv8 model with `nms=True`. The interesting thing is that if I export the pretrained model instead (say yolov8s.pt), the Xcode preview looks correct; the output only looks nonsensical when I export a trained model (starting from the pretrained model).
@pwtm2 hey there! It seems like the issue might be tied specifically to how the NMS is behaving with models post-training. If the pretrained model exports correctly but issues arise post-training, this could be linked to changes in model behavior or output that aren't handled well by the current NMS setup in the CoreML export.
A quick check would be to see if setting `nms=False` during export yields sensible output (albeit without NMS applied), just to confirm whether it's indeed related to the NMS process:
yolo export model=your_trained_model.pt format=coreml nms=False
Testing with the above might provide more insight. Keep us posted! 🌟
Hi @glenn-jocher, yes - just as @rromanchuk observed (and I confirm), `nms=False` does indeed yield good results. It seems the output is only wrong when `nms=True` and the model has been additionally trained. I find that very strange!
Hi @pwtm2! Thanks for confirming that observation. It does highlight a specific issue with the NMS process in the CoreML export for further-trained models. We'll review the export configurations and investigate potential discrepancies introduced during additional training stages. Meanwhile, you can continue using `nms=False` for stable outputs and apply NMS externally if necessary. 🛠️ Keep us posted on any new findings, and thanks again for your valuable inputs!
Search before asking
Question
After training and exporting to CoreML (nms=True), the model did not filter predictions where bboxes had almost the same location but a different label, regardless of the nms_threshold parameter. It looks like the NMS is class-aware, not class-agnostic; is that true? If so, is there a way to export the model with a different NMS policy?
Example: image with predictions:
354: ("AbsuWildBer_WOD_Sma_BUT_700ml_00_00") [0.7348346, 0.556187, 0.79887384, 0.6867266], confidence: 0.94365996
23: ("AbsuKuran_WOD_Sma_BUT_700ml_00_00") [0.7376839, 0.5555132, 0.7971864, 0.6878032], confidence: 0.6220576
As you can see, the purple bottle has two labels in the same location.
Additional
No response