Banconxuan / RTM3D

The official PyTorch Implementation of RTM3D and KM3D for Monocular 3D Object Detection

Can't parse 'pt1'. Sequence item with index 0 has a wrong type #47

Closed · thedevleon closed this issue 3 years ago

thedevleon commented 3 years ago

Been trying to get this to run, and jumping through a couple of hoops because I'm using an RTX 2070 Super, which requires at least CUDA 10. Setting up the conda environment with the following allowed me to build DCNv2 as well as iou3d; however, when trying out the demo, I get the error below:

python ./src/faster.py --vis --demo ./demo_kitti_format/data/kitti/image --calib_dir ./demo_kitti_format/data/kitti/calib --load_model ./demo_kitti_format/exp/KM3D/model_res18_1.pth --gpus 0 --arch res_18

Fix size testing.
training chunk_sizes: [32]
The output will be saved to  exp/default
heads {'hm': 3, 'wh': 2, 'hps': 18, 'rot': 8, 'dim': 3, 'prob': 1, 'reg': 2, 'hm_hp': 9, 'hp_offset': 2}
Creating model...
=> loading pretrained model https://download.pytorch.org/models/resnet18-5c106cde.pth
./demo_kitti_format/exp/KM3D/model_res18_1.pth
loaded ./demo_kitti_format/exp/KM3D/model_res18_1.pth, epoch 199
Drop parameter hm_hp.0.weight.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter hm_hp.0.bias.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter hm_hp.2.weight.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter hm_hp.2.bias.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter hp_offset.0.weight.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter hp_offset.0.bias.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter hp_offset.2.weight.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter hp_offset.2.bias.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter reg.0.weight.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter reg.0.bias.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter reg.2.weight.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter reg.2.bias.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter wh.0.weight.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter wh.0.bias.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter wh.2.weight.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter wh.2.bias.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
corners:
[[506.05777 370.8125 ]
 [346.08633 357.84247]
 [512.6833  291.99506]
 [620.54315 297.24396]
 [506.05777 174.95726]
 [346.08633 174.81946]
 [512.6833  174.11986]
 [620.54315 174.17561]
 [510.1939  248.02145]]
Traceback (most recent call last):
  File "./src/faster.py", line 55, in <module>
    demo(opt)
  File "./src/faster.py", line 46, in demo
    ret = detector.run(image_name)
  File "/home/leon/Desktop/avular/testing/RTM3D_KM3D/src/lib/detectors/base_detector.py", line 163, in run
    self.show_results(debugger, image, results, calib_numpy)
  File "/home/leon/Desktop/avular/testing/RTM3D_KM3D/src/lib/detectors/car_pose.py", line 110, in show_results
    debugger.add_3d_detection(bbox, calib, img_id='car_pose')
  File "/home/leon/Desktop/avular/testing/RTM3D_KM3D/src/lib/utils/debugger.py", line 472, in add_3d_detection
    self.imgs[img_id] = draw_box_3d(self.imgs[img_id], box_2d, cl)
  File "/home/leon/Desktop/avular/testing/RTM3D_KM3D/src/lib/utils/ddd_utils.py", line 110, in draw_box_3d
    (corners[f[(j+1)%4], 0], corners[f[(j+1)%4], 1]), c, 2, lineType=cv2.LINE_AA)
cv2.error: OpenCV(4.5.3) :-1: error: (-5:Bad argument) in function 'line'
> Overload resolution failed:
>  - Can't parse 'pt1'. Sequence item with index 0 has a wrong type
>  - Can't parse 'pt1'. Sequence item with index 0 has a wrong type

Maybe this has to do with my version of OpenCV and changed API calls? If so, what version of OpenCV was used originally?

I also added a print for the corners; maybe that helps figure out the issue.

The error seems to be on this line: https://github.com/Banconxuan/RTM3D/blob/888c379e79d8a6d134f06a9b7d669118679e06dc/src/lib/utils/ddd_utils.py#L104
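
For context, a minimal sketch (not part of the repo; the coordinates are made-up float32 values like the printed corners) that appears to reproduce the same failure in isolation on opencv-python 4.5.x:

import cv2
import numpy as np

img = np.zeros((480, 640, 3), dtype=np.uint8)
# float32 corner coordinates, similar to the ones printed above
corners = np.array([[506.05777, 370.8125],
                    [346.08633, 357.84247]], dtype=np.float32)

try:
    # Recent opencv-python builds reject non-integer point coordinates.
    cv2.line(img, (corners[0, 0], corners[0, 1]),
             (corners[1, 0], corners[1, 1]), (0, 0, 255), 2)
except cv2.error as e:
    print(e)  # should show the "Can't parse 'pt1'" message

# Casting each coordinate to a plain int draws the line without complaint.
cv2.line(img, (int(corners[0, 0]), int(corners[0, 1])),
         (int(corners[1, 0]), int(corners[1, 1])), (0, 0, 255), 2)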

morrolinux commented 3 years ago

It does. It might seem unrelated, but which versions of torch and torchvision are you using? Can you train the model correctly?

Dosimz commented 3 years ago

I solved it like this ...

def draw_box_3d(image, corners, c=(0, 0, 255)):
  # Four side faces of the 3D box; the face at index 0 is drawn with a
  # cross below to mark the front of the box.
  face_idx = [[0, 1, 5, 4],
              [1, 2, 6, 5],
              [2, 3, 7, 6],
              [3, 0, 4, 7]]
  for ind_f in range(3, -1, -1):
    f = face_idx[ind_f]
    for j in range(4):
      # Cast coordinates to int: recent OpenCV versions reject float points.
      cv2.line(image, (int(corners[f[j], 0]), int(corners[f[j], 1])),
               (int(corners[f[(j+1)%4], 0]), int(corners[f[(j+1)%4], 1])), c, 2, lineType=cv2.LINE_AA)
    if ind_f == 0:
      # Draw the diagonals of the front face.
      cv2.line(image, (int(corners[f[0], 0]), int(corners[f[0], 1])),
               (int(corners[f[2], 0]), int(corners[f[2], 1])), c, 1, lineType=cv2.LINE_AA)
      cv2.line(image, (int(corners[f[1], 0]), int(corners[f[1], 1])),
               (int(corners[f[3], 0]), int(corners[f[3], 1])), c, 1, lineType=cv2.LINE_AA)
  return image

Convert pt1 (and pt2) from floating point to int.

cv2.line(img, pt1, pt2, color, thickness, lineType)

Per the OpenCV 4.5.3 docs, pt1 and pt2 are Point arguments whose coordinates are ints:
https://docs.opencv.org/4.5.3/d6/d6e/group__imgproc__draw.html#ga7078a9fae8c7e7d13d24dac2520ae4a2
https://docs.opencv.org/4.5.3/dc/d84/group__core__basic.html#ga1e83eafb2d26b3c93f09e8338bcab192
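
An alternative sketch (not the repo's code; to_int is a hypothetical helper) that rounds and converts the coordinates once instead of casting at every call site:

def to_int(pt):
    # Round an (x, y) pair of floats and return plain Python ints,
    # which both old and new OpenCV point parsers accept.
    return int(round(float(pt[0]))), int(round(float(pt[1])))

# Usage inside the drawing loop, assuming corners is the float32 array above:
# cv2.line(image, to_int(corners[f[j]]), to_int(corners[f[(j+1) % 4]]),
#          c, 2, lineType=cv2.LINE_AA)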

morrolinux commented 3 years ago

Could you share your conda list output? I'm having a weird issue in training after re-installing the project, and I suspect it's due to an unpinned package version that broke compatibility.

morrolinux commented 3 years ago

Also, I was able to solve the OP's issue by installing a pinned (previous) version of OpenCV via pip install opencv-python==4.0.0.21.
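
If you go that route, it may be worth double-checking which OpenCV build the script actually picks up inside the environment, e.g.:

import cv2
print(cv2.__version__)  # should report 4.0.0 after pinning opencv-python==4.0.0.21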

thedevleon commented 3 years ago

> I solved it like this ...

Thanks, that seemed to have fixed it for me.

For reference, here is my current conda environment:

name: KM3D
channels:
  - pytorch
  - anaconda
  - defaults
dependencies:
  - _libgcc_mutex=0.1=main
  - _openmp_mutex=4.5=1_gnu
  - blas=1.0=openblas
  - ca-certificates=2021.7.5=h06a4308_1
  - certifi=2021.5.30=py36h06a4308_0
  - cffi=1.14.6=py36h400218f_0
  - cudatoolkit=10.0.130=0
  - freetype=2.10.4=h5ab3b9f_0
  - intel-openmp=2021.3.0=h06a4308_3350
  - jpeg=9d=h7f8727e_0
  - lcms2=2.12=h3be6417_0
  - ld_impl_linux-64=2.35.1=h7274673_9
  - libffi=3.3=he6710b0_2
  - libgcc-ng=9.3.0=h5101ec6_17
  - libgfortran-ng=7.5.0=ha8ba4b0_17
  - libgfortran4=7.5.0=ha8ba4b0_17
  - libgomp=9.3.0=h5101ec6_17
  - libopenblas=0.3.13=h4367d64_0
  - libpng=1.6.37=hbc83047_0
  - libstdcxx-ng=9.3.0=hd4cf53a_17
  - libtiff=4.2.0=h85742a9_0
  - libwebp-base=1.2.0=h27cfd23_0
  - lz4-c=1.9.3=h295c915_1
  - mkl=2020.2=256
  - ncurses=6.2=he6710b0_1
  - ninja=1.10.2=hff7bd54_1
  - numpy=1.17.0=py36h99e49ec_0
  - numpy-base=1.17.0=py36h2f8d375_0
  - olefile=0.46=py36_0
  - openjpeg=2.4.0=h3ad879b_0
  - openssl=1.1.1l=h7f8727e_0
  - pillow=8.3.1=py36h2c7a002_0
  - pip=21.0.1=py36h06a4308_0
  - pycparser=2.20=py_2
  - python=3.6.13=h12debd9_1
  - pytorch=1.2.0=py3.6_cuda10.0.130_cudnn7.6.2_0
  - readline=8.1=h27cfd23_0
  - setuptools=58.0.4=py36h06a4308_0
  - six=1.16.0=pyhd3eb1b0_0
  - sqlite=3.36.0=hc218d9a_0
  - tk=8.6.10=hbc83047_0
  - torchvision=0.4.0=py36_cu100
  - wheel=0.37.0=pyhd3eb1b0_1
  - xz=5.2.5=h7b6447c_0
  - zlib=1.2.11=h7b6447c_3
  - zstd=1.4.9=haebb681_0
  - pip:
    - cycler==0.10.0
    - cython==0.29.24
    - decorator==4.4.2
    - easydict==1.9
    - fire==0.4.0
    - imageio==2.9.0
    - iou3d==0.0.0
    - kiwisolver==1.3.1
    - llvmlite==0.36.0
    - matplotlib==3.3.4
    - networkx==2.5.1
    - numba==0.53.1
    - opencv-python==4.5.3.56
    - progress==1.6
    - protobuf==3.18.0
    - pycocotools==2.0.2
    - pyparsing==2.4.7
    - python-dateutil==2.8.2
    - pywavelets==1.1.1
    - scikit-image==0.17.2
    - scipy==1.5.4
    - tensorboardx==2.4
    - termcolor==1.1.0
    - tifffile==2020.9.3
prefix: /home/leon/anaconda3/envs/KM3D