open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

NaN outputs in YOLOX model using BYTETRACK example #10862

Open · mmeendez8 opened this issue 1 year ago

mmeendez8 commented 1 year ago

Describe the bug

I am trying to run tracking with the ByteTrack model, which uses YOLOX, but I am not able to get it to work.

Reproduction

  1. What command or script did you run?
python demo/mot_demo.py \
    demo/demo_mot.mp4 \
    configs/bytetrack/bytetrack_yolox_x_8xb4-80e_crowdhuman-mot17halftrain_test-mot17halfval.py \
    --out output/bytetrack.mp4

Environment

sys.platform: linux
Python: 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 3070 Laptop GPU
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.1, V11.1.74
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 2.0.1+cu118
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.8
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.7
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.8, CUDNN_VERSION=8.7.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 

TorchVision: 0.15.2+cu118
OpenCV: 4.8.0
MMEngine: 0.8.3
MMDetection: 3.1.0+f07cc98

Error traceback

/home/mmendez/.cache/pypoetry/virtualenvs/mmtracking-test-Nyj792Dd-py3.10/lib/python3.10/site-packages/mmengine/visualization/visualizer.py:746: UserWarning: Warning: The bbox is out of bounds, the drawn bbox may not be in the image
  warnings.warn(
/home/mmendez/.cache/pypoetry/virtualenvs/mmtracking-test-Nyj792Dd-py3.10/lib/python3.10/site-packages/mmengine/visualization/visualizer.py:817: UserWarning: Warning: The polygon is out of bounds, the drawn polygon may not be in the image
  warnings.warn(
/home/mmendez/.cache/pypoetry/virtualenvs/mmtracking-test-Nyj792Dd-py3.10/lib/python3.10/site-packages/mmdet/visualization/palette.py:90: RuntimeWarning: invalid value encountered in floor_divide
  scales = 0.5 + (areas - min_area) // (max_area - min_area)
Traceback (most recent call last):
  File "/home/mmendez/work/mmtracking_test/demo/mot_demo.py", line 130, in <module>
    main(args)
  File "/home/mmendez/work/mmtracking_test/demo/mot_demo.py", line 109, in main
    visualizer.add_datasample(
  File "/home/mmendez/.cache/pypoetry/virtualenvs/mmtracking-test-Nyj792Dd-py3.10/lib/python3.10/site-packages/mmengine/dist/utils.py", line 401, in wrapper
    return func(*args, **kwargs)
  File "/home/mmendez/.cache/pypoetry/virtualenvs/mmtracking-test-Nyj792Dd-py3.10/lib/python3.10/site-packages/mmdet/visualization/local_visualizer.py", line 684, in add_datasample
    pred_img_data = self._draw_instances(image, pred_instances)
  File "/home/mmendez/.cache/pypoetry/virtualenvs/mmtracking-test-Nyj792Dd-py3.10/lib/python3.10/site-packages/mmdet/visualization/local_visualizer.py", line 607, in _draw_instances
    font_sizes=int(13 * scales[i]),
ValueError: cannot convert float NaN to integer
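
For reference, the final ValueError is just the visualizer choking on NaN box areas; a minimal standalone sketch of the failing arithmetic (my own reproduction, not mmdet code):

import numpy as np

# NaN coordinates give NaN areas, the floor division then emits the
# 'invalid value encountered in floor_divide' warning, and the cast to
# int raises the ValueError from the traceback above.
areas = np.array([np.nan, np.nan])
scales = 0.5 + (areas - areas.min()) // (areas.max() - areas.min())
int(13 * scales[0])  # ValueError: cannot convert float NaN to integer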
mmeendez8 commented 1 year ago

I just debugged the model output and got the following:

print(result.video_data_samples[0].pred_instances.bboxes)
tensor([[-1.9546e+09, -5.3129e+09, -1.9546e+09, -5.3129e+09],
        [       -inf, -7.4956e+09,         inf, -7.4956e+09],
        [       -inf,  1.6434e+09,         inf,  1.6434e+09],
        [ 4.2734e+09,  1.9240e+09,  4.2734e+09,  1.9240e+09]], device='cuda:0')
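
A quick assertion I added to catch this before it reaches the visualizer (my own check, not part of the demo):

import torch

bboxes = result.video_data_samples[0].pred_instances.bboxes
assert torch.isfinite(bboxes).all(), 'detector produced non-finite boxes'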
mmeendez8 commented 1 year ago

Some more comments: I got confused and thought the weight path of the detection model was baked into the config, but it has to be attached manually. Everything works if you just use the proper command:

python demo/mot_demo.py \
    demo/demo_mot.mp4 \
    configs/bytetrack/bytetrack_yolox_x_8xb4-80e_crowdhuman-mot17halftrain_test-mot17halfval.py \
    --checkpoint https://download.openmmlab.com/mmtracking/mot/bytetrack/bytetrack_yolox_x/bytetrack_yolox_x_crowdhuman_mot17-private-half_20211218_205500-1985c9f0.pth \
    --out output/bytetrack.mp4

But problems arise with SORT-style trackers, which take --detector and --reid flags. I get NaN errors again... it seems the model is being loaded in a different way, but I cannot really see what is going on (see the sketch after the two commands below).

This command works without any trouble:

python demo/mot_demo.py \
    demo/demo.mp4 \
    configs/sort/sort_yolox_x_8xb4-80e_crowdhuman-mot17halftrain_test-mot17halfval.py \
    --checkpoint https://download.openmmlab.com/mmtracking/mot/bytetrack/bytetrack_yolox_x/bytetrack_yolox_x_crowdhuman_mot17-private-half_20211218_205500-1985c9f0.pth \
    --out output/sort_yolox.mp4

But when I try to do the same using the --detector flag, I get NaN errors again because the model output is NaN:

python demo/mot_demo.py \
    demo/demo.mp4 \
    configs/sort/sort_yolox_x_8xb4-80e_crowdhuman-mot17halftrain_test-mot17halfval.py \
    --detector https://download.openmmlab.com/mmtracking/mot/bytetrack/bytetrack_yolox_x/bytetrack_yolox_x_crowdhuman_mot17-private-half_20211218_205500-1985c9f0.pth \
    --out output/sort_yolox.mp4
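
My guess at what is happening, a sketch assuming mot_demo.py loads --checkpoint into the whole tracker but --detector into the detector submodule only (I have not verified the exact calls):

from mmengine.runner import load_checkpoint

# Assumed behaviour, simplified. The ByteTrack checkpoint's state_dict keys
# are prefixed with 'detector.', so loading it into the bare detector
# submodule would match no keys and silently leave YOLOX randomly
# initialized, which would explain the NaN outputs.
load_checkpoint(model, args.checkpoint)         # 'detector.backbone...' keys match
load_checkpoint(model.detector, args.detector)  # same keys match nothing here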

This is the config file I am using for SORT; I just modified it to use YOLOX as the detector:

_base_ = ['../yolox/yolox_x_8xb8-300e_coco.py'] # same as mmdet one

dataset_type = 'MOTChallengeDataset'
data_root = 'data/MOT17/'

img_scale = (1440, 800)  # width, height
batch_size = 4

detector = _base_.model
detector.pop('data_preprocessor')
detector.bbox_head.update(dict(num_classes=1))
detector.test_cfg.nms.update(dict(iou_threshold=0.7))
detector['init_cfg'] = dict(
    type='Pretrained',
    checkpoint=  # noqa: E251
    'https://download.openmmlab.com/mmdetection/v2.0/yolox/yolox_x_8x8_300e_coco/yolox_x_8x8_300e_coco_20211126_140254-1ef88d67.pth'  # noqa: E501
)
del _base_.model

model = dict(
    type='DeepSORT',
    data_preprocessor=dict(
        type='TrackDataPreprocessor',
        pad_size_divisor=32),
    detector=detector,
    tracker=dict(
        type='SORTTracker',
        motion=dict(type='KalmanFilter', center_only=False),
        obj_score_thr=0.5,
        match_iou_thr=0.5,
        reid=None))

test_pipeline = [
    dict(
        type='TransformBroadcaster',
        transforms=[
            dict(type='LoadImageFromFile', backend_args=_base_.backend_args),
            dict(type='Resize', scale=img_scale, keep_ratio=True),
            dict(
                type='Pad',
                size_divisor=32,
                pad_val=dict(img=(114.0, 114.0, 114.0))),
            dict(type='LoadTrackAnnotations'),
        ]),
    dict(type='PackTrackInputs')
]

val_dataloader = dict(
    _delete_=True,
    batch_size=1,
    num_workers=2,
    persistent_workers=True,
    pin_memory=True,
    drop_last=False,
    # video_based
    # sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
    sampler=dict(type='TrackImgSampler'),  # image_based
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='annotations/half-val_cocoformat.json',
        data_prefix=dict(img_path='train'),
        test_mode=True,
        pipeline=test_pipeline))
test_dataloader = val_dataloader

vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='TrackLocalVisualizer', vis_backends=vis_backends, name='visualizer')

del detector
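
To check the key-prefix hypothesis, one can inspect the checkpoint directly (standalone sketch, assuming the usual mmengine checkpoint layout with a 'state_dict' entry):

import torch

url = ('https://download.openmmlab.com/mmtracking/mot/bytetrack/bytetrack_yolox_x/'
       'bytetrack_yolox_x_crowdhuman_mot17-private-half_20211218_205500-1985c9f0.pth')
ckpt = torch.hub.load_state_dict_from_url(url, map_location='cpu')
# Keys starting with 'detector.' would mean this is a full-tracker
# checkpoint, not a bare detector checkpoint.
print(list(ckpt['state_dict'])[:3])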