MCG-NJU / MixSort

[ICCV2023] MixSort: The Customized Tracker in SportsMOT

File /MixSort/yolox/mixsort_oc_tracker/mixformer.py:103 causes TypeError: expected Tensor as element 0 in argument 0, but got tuple #2

Open SergeySandler opened 1 year ago

SergeySandler commented 1 year ago

The call torch.stack(template_imgs).float().div(255) on line 103 in /MixSort/yolox/mixsort_oc_tracker/mixformer.py,

template_imgs = normalize(
    torch.stack(template_imgs).float().div(255),
    self.cfg.DATA.MEAN,
    self.cfg.DATA.STD,
)

causes TypeError: expected Tensor as element 0 in argument 0, but got tuple.
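
For reference, this message comes straight from torch.stack whenever an element of the list is not a tensor. A minimal standalone illustration (the tuple below is only a stand-in, not the actual contents of template_imgs):

    import torch

    # torch.stack requires a sequence of tensors; a tuple as the first element
    # triggers the same TypeError reported above.
    imgs = [(torch.zeros(3, 96, 96),), torch.zeros(3, 96, 96)]
    torch.stack(imgs)  # TypeError: expected Tensor as element 0 in argument 0, but got tuple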

I've followed the installation instructions at https://github.com/MCG-NJU/MixSort#installation. My code fragment that triggers the problem is as follows:

import cv2
from collections import namedtuple
from yolox.mixsort_oc_tracker.mixsort_oc_tracker import MIXTracker

ArgsTuple = namedtuple("ArgsTuple", ["script", "config", "alpha", "radius", "mix_iou", "local_rank"])
argsTuple = ArgsTuple("mixformer_deit", "track", 0.6, 0, 0.2, 0)
tracker = MIXTracker(det_thresh=0.5, args=argsTuple, use_byte=False)

vcapture = cv2.VideoCapture("path-to-video")
while True:
    success, frame = vcapture.read()
    if not success:
        break
    raw_detection = ...  # fill in from a custom YOLOv8 model
    detections = tracker.update(raw_detection, (1, 1), (1, 1), frame, scale=False)

The call stack:

    File /MixSort/yolox/mixsort_oc_tracker/mixsort_oc_tracker.py:284, in MIXTracker.update(self, output_results, img_info, img_size, img, scale)
    File /MixSort/yolox/mixsort_oc_tracker/association.py:272, in associate(detections, trackers, iou_threshold, velocities, previous_obs, vdc_weight, img, mixformer, alpha, templates)
    File /MixSort/yolox/mixsort_oc_tracker/mixformer.py:103, in MixFormer.compute_vitsim(self, detections, trackers, img, templates)

ret-1 commented 1 year ago

Hi,

From the error message you provided, it seems that the template_imgs variable is not a list of torch.Tensor objects as expected. The error indicates that one or more elements in the template_imgs list are tuples, which is causing the torch.stack operation to fail.

To help diagnose the issue, could you please check the value and type of the template_imgs variable right before the normalize() function call? Specifically, the variable is assigned with the following code:

https://github.com/MCG-NJU/MixSort/blob/52d317c1c79860afe705dd2252c4ae754308b75e/yolox/mixsort_oc_tracker/mixformer.py#L90

Please ensure that the templates list contains only torch.Tensor objects and that the indexing operation templates[int(t[-1])] always returns a tensor.

By examining the contents of template_imgs before the normalize() call, we should be able to identify the source of the problem.

SergeySandler commented 1 year ago

@ret-1, templates is a list of tuples; see the output below, from print statements added after line 90.

        print(type(templates))           # <class 'list'>
        print(len(templates))            # 20
        print(type(templates[0]))        # <class 'tuple'>
        print(len(templates[0]))         # 2
        print(type(templates[0][0]))     # <class 'torch.Tensor'>
        print(len(templates[0][0]))      # 3
        print(templates[0][0].shape)     # torch.Size([3, 96, 96])
        print(type(templates[0][1]))     # <class 'list'>
        print(len(templates[0][1]))      # 0

Notice the first element in the list is a tuple of a tensor and an empty list.
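
A possible band-aid (only a sketch, not verified against the rest of the pipeline) would be to unwrap such tuples right before the stack on line 103, keeping only the tensor part:

    # Hypothetical patch just above the normalize() call in compute_vitsim,
    # reusing the names visible in the snippet quoted at the top of this issue.
    template_imgs = [t[0] if isinstance(t, tuple) else t for t in template_imgs]
    template_imgs = normalize(
        torch.stack(template_imgs).float().div(255),
        self.cfg.DATA.MEAN,
        self.cfg.DATA.STD,
    )

That would only avoid the crash; the zero tensors would still be passed to the similarity computation as if they were real templates.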

ret-1 commented 1 year ago

@SergeySandler From your details, the issue seems to originate from the exception handling in

https://github.com/MCG-NJU/MixSort/blob/52d317c1c79860afe705dd2252c4ae754308b75e/yolox/mixsort_oc_tracker/mixformer.py#L66-L72

This behavior is actually a simplified adaptation of the handling below, where data is marked as invalid if the boxes are too small. The choice to return a zero tensor and an empty list was made because this specific case wasn't encountered during our experiments.

https://github.com/MCG-NJU/MixSort/blob/52d317c1c79860afe705dd2252c4ae754308b75e/MixViT/lib/train/data/processing.py#L273-L280

Could you confirm whether this aligns with the issue you observed? If so, consider making the necessary adjustments and submitting a pull request to handle such cases more effectively. Thanks for pointing this out.
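
For what it's worth, a sketch of one possible adjustment: have the failure branch return a plain zero tensor of the template shape instead of a (tensor, list) tuple, so that downstream torch.stack calls always receive tensors. The helper below is hypothetical and only illustrates the idea; the actual code around mixformer.py lines 66-72 differs.

    import torch

    def safe_template(crop_fn, img, box, shape=(3, 96, 96)):
        """Return a template tensor, falling back to a plain zero tensor
        (rather than a (tensor, list) tuple) when cropping fails, e.g. for
        boxes that are too small. crop_fn stands in for whatever cropping
        routine is used around mixformer.py lines 66-72."""
        try:
            return crop_fn(img, box)
        except Exception:
            return torch.zeros(*shape)

Whether an all-zero template is an acceptable fallback for the ViT similarity, or whether such tracks should be skipped entirely, is a separate design question.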

sky-creater commented 6 months ago

@SergeySandler Have you solved this problem? Could you provide the solution?

SergeySandler commented 6 months ago

@SergeySandler Have you solved this problem? Could you provide the solution?

@sky-creater No, unfortunately I have not solved it.