
boxes is not none but boxes.int is none #17280

Open quandn2003 opened 1 day ago

quandn2003 commented 1 day ago

Search before asking

Ultralytics YOLO Component

Predict

Bug

I'm running into a problem when running tracking; the error log is attached as an image.

I added some print statements for debugging (screenshot attached); the printed output is:

Speed: 4.9ms preprocess, 73.9ms inference, 2.2ms postprocess per image at shape (1, 3, 736, 1280)

ultralytics.engine.results.Boxes object with attributes:

cls: tensor([2., 1., 1., 1., 2., 1., 1.], device='cuda:0')
conf: tensor([0.7524, 0.7012, 0.6504, 0.6207, 0.5408, 0.3576, 0.3310], device='cuda:0')
data: tensor([[1.7132e+03, 3.6145e+02, 1.8265e+03, 5.4264e+02, 7.5240e-01, 2.0000e+00], [1.4597e+03, 2.6598e+02, 1.5244e+03, 3.9049e+02, 7.0118e-01, 1.0000e+00], [7.5962e+02, 2.4305e+02, 7.9756e+02, 3.3320e+02, 6.5044e-01, 1.0000e+00], [1.6534e+03, 8.8274e+01, 1.6896e+03, 1.8568e+02, 6.2074e-01, 1.0000e+00], [1.8074e+03, 1.3880e+02, 1.8454e+03, 2.4049e+02, 5.4076e-01, 2.0000e+00], [1.7687e+03, 2.0319e+02, 1.8388e+03, 2.8379e+02, 3.5755e-01, 1.0000e+00], [1.6781e+03, 8.9673e+01, 1.6981e+03, 1.5320e+02, 3.3101e-01, 1.0000e+00]], device='cuda:0')
id: None
is_track: False
orig_shape: (1080, 1920)
shape: torch.Size([7, 6])
xywh: tensor([[1769.8567, 452.0468, 113.2866, 181.1950], [1492.0507, 328.2383, 64.7390, 124.5082], [ 778.5859, 288.1222, 37.9395, 90.1532], [1671.5173, 136.9755, 36.2457, 97.4032], [1826.3965, 189.6457, 38.0559, 101.6830], [1803.7438, 243.4910, 70.1233, 80.6038], [1688.1108, 121.4385, 20.0078, 63.5317]], device='cuda:0')
xywhn: tensor([[0.9218, 0.4186, 0.0590, 0.1678], [0.7771, 0.3039, 0.0337, 0.1153], [0.4055, 0.2668, 0.0198, 0.0835], [0.8706, 0.1268, 0.0189, 0.0902], [0.9512, 0.1756, 0.0198, 0.0942], [0.9394, 0.2255, 0.0365, 0.0746], [0.8792, 0.1124, 0.0104, 0.0588]], device='cuda:0')
xyxy: tensor([[1713.2134, 361.4493, 1826.5000, 542.6443], [1459.6812, 265.9842, 1524.4202, 390.4924], [ 759.6162, 243.0456, 797.5557, 333.1988], [1653.3944, 88.2739, 1689.6401, 185.6771], [1807.3685, 138.8042, 1845.4244, 240.4872], [1768.6821, 203.1891, 1838.8054, 283.7929], [1678.1069, 89.6726, 1698.1147, 153.2043]], device='cuda:0')
xyxyn: tensor([[0.8923, 0.3347, 0.9513, 0.5024], [0.7603, 0.2463, 0.7940, 0.3616], [0.3956, 0.2250, 0.4154, 0.3085], [0.8611, 0.0817, 0.8800, 0.1719], [0.9413, 0.1285, 0.9612, 0.2227], [0.9212, 0.1881, 0.9577, 0.2628], [0.8740, 0.0830, 0.8844, 0.1419]], device='cuda:0')
None

So why is results[0].boxes not None while results[0].boxes.id is None and is_track is False?

P.S.: The error only happens in one specific frame; there were no errors in the previous frames.

Environment

Ultralytics 8.3.3 🚀 Python-3.10.12 torch-2.4.1+cu121 CUDA:0 (NVIDIA GeForce RTX 3060 Laptop GPU, 6144MiB)
Setup complete ✅ (12 CPUs, 6.7 GB RAM, 99.2/1006.9 GB disk)

OS            Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Environment   Linux
Python        3.10.12
Install       pip
RAM           6.70 GB
CPU           AMD Ryzen 5 5600H with Radeon Graphics
CUDA          12.1

numpy ✅ 1.26.4<2.0.0,>=1.23.0
matplotlib ✅ 3.9.2>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 10.4.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.14.1>=1.4.1
torch ✅ 2.4.1>=1.8.0
torchvision ✅ 0.19.1>=0.9.0
tqdm ✅ 4.66.5>=4.64.0
psutil ✅ 6.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.8>=2.0.0
torch ✅ 2.4.1!=2.4.0,>=1.8.0; sys_platform == "win32"

Minimal Reproducible Example

while cap.isOpened():
    success, frame = cap.read()
    if success:
        results = model_player.track(frame, persist=True, tracker="bytetrack.yaml", conf=0.2, iou=0.5, device=device)
        boxes = results[0].boxes.xywh.cpu()
        print(results[0].boxes)
        print(results[0].boxes.id)
        track_ids = results[0].boxes.id.int().cpu().tolist()
        classes = results[0].boxes.cls.int().cpu().tolist()  # Get the class indices

Additional

No response

Are you willing to submit a PR?

UltralyticsAssistant commented 1 day ago

👋 Hello @quandn2003, thank you for your interest in Ultralytics 🚀! We recommend a visit to the Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it. While you have shared a section of the code, providing a full context could aid in pinpointing the issue more efficiently.

Regarding your tracking issue, it seems you're encountering a problem with the tracking ID being None for a specific frame. This could be due to several reasons related to the tracker configuration or the specific properties of that frame. Ensure your tracker settings are correctly configured and possibly compare this specific frame with others that work.
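As a rough illustration of that suggestion, one way to compare the problematic frame against frames that track correctly is to dump any frame where no IDs are assigned, together with its detection confidences. This is a sketch only; `model`, `frame`, and the `frame_idx` counter are assumed to come from your own loop:

import cv2

results = model.track(frame, persist=True, tracker="bytetrack.yaml", conf=0.2, iou=0.5)

if results[0].boxes.id is None:
    # Save the frame and log its detection confidences so it can be compared
    # with neighbouring frames where tracking works
    cv2.imwrite(f"untracked_frame_{frame_idx}.jpg", frame)
    print(f"frame {frame_idx}: no track IDs, conf = {results[0].boxes.conf.cpu().tolist()}")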

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the Ultralytics community where it suits you best. For real-time chat, head to Discord 🎧. Prefer in-depth discussions? Check out Discourse. Or dive into threads on our Subreddit to share knowledge with the community.

Upgrade

Upgrade to the latest ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8 to verify your issue is not already resolved in the latest version:

pip install -U ultralytics

Environments

YOLO may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled): free-GPU notebooks (Gradient, Colab, Kaggle), Google Cloud Deep Learning VM (see the GCP Quickstart Guide), Amazon Deep Learning AMI (see the AWS Quickstart Guide), and the Ultralytics Docker image (see the Docker Quickstart Guide).

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLO Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

This is an automated response 🌟. An Ultralytics engineer will review your issue shortly and provide further assistance.

RizwanMunawar commented 1 day ago

@quandn2003 Hi, thanks for raising this issue. As you already mentioned, 'the error only occurred in one specific frame, with no issues in previous frames,' which suggests that the object wasn't tracked in that frame. This could be due to low confidence, tracking settings, or similar factors.

Using the code below is likely the safest long-term solution to prevent this issue.

results = model.track(im0, persist=True)

if results[0].boxes.id is not None:
    boxes = results[0].boxes.xyxy.cpu()
    track_ids = results[0].boxes.id.int().cpu().tolist()

    print(boxes)
    print(track_ids)

    # Loop over each box together with its track ID
    for box, t_id in zip(boxes, track_ids):
        # additional logic
        ...
I hope this helps. Thank you!