open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

UT failed: test_yolox_random_size #6863

Closed del-zhenwu closed 2 years ago

del-zhenwu commented 2 years ago

Thanks for your error report and we appreciate it a lot.

Checklist

  1. I have searched related issues but cannot get the expected help.
  2. I have read the FAQ documentation but cannot get the expected help.
  3. The bug has not been fixed in the latest version.

Describe the bug

```python
____________________________ test_yolox_random_size ____________________________

    @pytest.mark.skipif(
        not torch.cuda.is_available(), reason='requires CUDA support')
    def test_yolox_random_size():
        from mmdet.models import build_detector
        model = _get_detector_cfg('yolox/yolox_tiny_8x8_300e_coco.py')
        model.random_size_range = (2, 2)
        model.input_size = (64, 96)
        model.random_size_interval = 1

        detector = build_detector(model)
        input_shape = (1, 3, 64, 64)
        mm_inputs = _demo_mm_inputs(input_shape)

        imgs = mm_inputs.pop('imgs')
        img_metas = mm_inputs.pop('img_metas')

        # Test forward train with non-empty truth batch
        detector.train()
        gt_bboxes = mm_inputs['gt_bboxes']
        gt_labels = mm_inputs['gt_labels']
        detector.forward(
            imgs,
            img_metas,
            gt_bboxes=gt_bboxes,
            gt_labels=gt_labels,
            return_loss=True)
        detector.forward(
            imgs,
            img_metas,
            gt_bboxes=gt_bboxes,
            gt_labels=gt_labels,
            return_loss=True)
>       assert detector._input_size == (64, 64)
E       assert (64, 96) == (64, 64)
E         At index 1 diff: 96 != 64
E         Use -v to get the full diff

tests/test_models/test_forward.py:707: AssertionError
```
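The mismatch above can be reproduced arithmetically. The sketch below is a simplified, hypothetical stand-in for the detector's multi-scale size selection, not the actual mmdet implementation: assuming the detector draws an integer from `random_size_range`, multiplies it by the stride (32), and derives the second dimension from the configured `input_size` aspect ratio, then with `input_size = (64, 96)` the aspect ratio is 96 / 64 = 1.5, and the deterministic draw of 2 yields (64, 96), which is the value the assertion observed rather than the (64, 64) it expects.

```python
import random


def random_resize(random_size_range, input_size, size_multiplier=32):
    """Hypothetical, simplified stand-in for YOLOX-style multi-scale
    size selection (an illustrative assumption, not mmdet's code).

    Draws an integer from random_size_range, scales it by the stride
    multiplier, and derives the second dimension from the configured
    input_size aspect ratio.
    """
    aspect_ratio = input_size[1] / input_size[0]  # 96 / 64 = 1.5
    size = random.randint(*random_size_range)     # always 2 for range (2, 2)
    return (size_multiplier * size,
            size_multiplier * int(aspect_ratio * size))


# With the test's config, the range (2, 2) makes the draw deterministic:
print(random_resize((2, 2), (64, 96)))  # (64, 96) -- the observed value,
                                        # not the expected (64, 64)
```

Under this reading, the expected (64, 64) would only come out if the aspect ratio were 1.0, e.g. derived from the 64x64 input tensor rather than the configured `input_size`.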

Reproduction

  1. What command or script did you run?
A placeholder for the command.
  2. Did you make any modifications to the code or config? Do you understand what you have modified?
  3. What dataset did you use?

Environment

ci image: ubuntu_1804_py_37_cuda_101_cudnn_7_torch_160
mmcv version: v1.4.1
code branch: master

  1. Please run `python mmdet/utils/collect_env.py` to collect necessary environment information and paste it here.
  2. You may add additional information that may be helpful for locating the problem, such as
    • How you installed PyTorch [e.g., pip, conda, source]
    • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

Error traceback If applicable, paste the error traceback here.

A placeholder for traceback.

Bug fix If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!

hhaAndroid commented 2 years ago

@del-zhenwu I will fix it.