Closed — ZelinBobyard closed this issue 19 hours ago
👋 Hello @ZelinBobyard, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.
Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.
pip install ultralytics
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@ZelinBobyard hello! Thanks for the detailed report. It seems you've encountered a known issue with torch.unique(return_counts=True) on MPS devices, which results in incorrect count values.
As a workaround, you might consider performing this operation on the CPU and then transferring the results back to the MPS device. Here's a quick example:
import torch

# Create the tensor on the MPS device
wzl_mps = torch.tensor([0.0]).to('mps')
# Perform the unique operation on CPU, where return_counts=True is reliable
wzl_cpu = wzl_mps.to('cpu')
unique_values, counts = wzl_cpu.unique(return_counts=True)
# Convert the results back to the MPS device if necessary
unique_values_mps = unique_values.to('mps')
counts_mps = counts.to('mps')
print("Unique values:", unique_values_mps)
print("Counts:", counts_mps)
This should provide the correct counts. We appreciate your patience and are looking into a more permanent fix for this issue. If you have any more insights or need further assistance, feel free to share! 🚀
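As a rough sketch, the same CPU round-trip could be wrapped in a small helper so it only kicks in for MPS tensors (the name mps_safe_unique is hypothetical and not part of the Ultralytics API):

```python
import torch

def mps_safe_unique(t: torch.Tensor):
    """Hypothetical helper: route unique(return_counts=True) through the CPU
    for MPS tensors, then move the results back to the original device."""
    if t.device.type == "mps":
        values, counts = t.cpu().unique(return_counts=True)
        return values.to(t.device), counts.to(t.device)
    # On CPU and CUDA the counts are computed correctly in place
    return t.unique(return_counts=True)

# Example on CPU: three zeros and one one
values, counts = mps_safe_unique(torch.tensor([0.0, 0.0, 1.0, 0.0]))
print(values, counts)  # tensor([0., 1.]) tensor([3, 1])
```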
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
YOLOv8 Component
No response
Bug
In the preprocess() function within loss.py:
Environment
Ultralytics YOLOv8.2.17 🚀 Python-3.11.9 torch-2.3.0 CPU (Apple M3 Pro) Setup complete ✅ (12 CPUs, 18.0 GB RAM, 102.5/460.4 GB disk)
OS macOS-14.3-arm64-arm-64bit
Environment Darwin
Python 3.11.9
Install pip
RAM 18.00 GB
CPU Apple M3 Pro
CUDA None

matplotlib ✅ 3.8.4>=3.3.0
opencv-python ✅ 4.7.0.72>=4.6.0
pillow ✅ 10.3.0>=7.1.2
pyyaml ✅ 6.0.1>=5.3.1
requests ✅ 2.31.0>=2.23.0
scipy ✅ 1.13.0>=1.4.1
torch ✅ 2.3.0>=1.8.0
torchvision ✅ 0.18.0>=0.9.0
tqdm ✅ 4.66.4>=4.64.0
psutil ✅ 5.9.8
py-cpuinfo ✅ 9.0.0
thop ✅ 0.1.1-2209072238>=0.1.1
pandas ✅ 2.2.2>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
Minimal Reproducible Example
Inside ultralytics/utils/loss.py, in the preprocess() function, reproduce the bug by adding:
Additional
No response
Are you willing to submit a PR?