megvii-research / CVPR2023-UniDistill

CVPR2023 (highlight) - UniDistill: A Universal Cross-Modality Knowledge Distillation Framework for 3D Object Detection in Bird's-Eye View
Apache License 2.0

ModuleNotFoundError: No module named 'unidistill.data.multisensorfusion.nuScenes_multimodal' #2

Closed DHuiTnut closed 1 year ago

DHuiTnut commented 1 year ago

Hi, I am interested in your excellent work!

When I run the test script "python unidistill/exps/multisensor_fusion/nuscenes/BEVFusion/BEVFusion_nuscenes_centerhead_camera_exp.py -b 1 --gpus 1 -p --ckpt_path ", the error shown in the title occurred. Could you please check whether any file is missing?

The details are as follows:

Traceback (most recent call last):
  File "BEVFusion_nuscenes_centerhead_camera_exp.py", line 2, in <module>
    from unidistill.exps.multisensor_fusion.nuscenes.BEVFusion.BEVFusion_nuscenes_centerhead_fusion_exp import (
  File "/root/CVPR2023-UniDistill/unidistill/exps/multisensor_fusion/nuscenes/BEVFusion/BEVFusion_nuscenes_centerhead_fusion_exp.py", line 11, in <module>
    from unidistill.exps.multisensor_fusion.nuscenes.BEVFusion.BEVFusion_nuscenes_base_exp import (
  File "/root/CVPR2023-UniDistill/unidistill/exps/multisensor_fusion/nuscenes/BEVFusion/BEVFusion_nuscenes_base_exp.py", line 15, in <module>
    from unidistill.data.multisensorfusion.nuscenes_multimodal import (
  File "/root/CVPR2023-UniDistill/unidistill/data/multisensorfusion/nuscenes_multimodal.py", line 11, in <module>
    from .nuScenes_multimodal import NuscenesMultiModalDataset
ModuleNotFoundError: No module named 'unidistill.data.multisensorfusion.nuScenes_multimodal'

sczhou21 commented 1 year ago

Hello, thanks for your interest in our work. This error occurs because unidistill has not been installed yet; please run "python setup.py develop" first to install it.

DHuiTnut commented 1 year ago

Thanks for your reply! I have run this command, but the same error occurred. (screenshot attached)

kebijuelun commented 1 year ago

Thanks for your reply! I have run this command, but the same error occurred.

Hi, sorry for the late reply. We are currently unable to reproduce this issue internally; it looks like the path of the installed library has not been added to your environment.

Here are some things you can try to provide more information:

  1. Make sure the file exists: ls unidistill/data/multisensorfusion/nuScenes_multimodal.py
  2. Check if the path of the installed library meets expectations: python3 -c "import unidistill; print(unidistill)"
  3. Check if python and pip are in the expected conda environment: which python; which pip
  4. Try reinstalling with --user: pip uninstall unidistill; python setup.py develop --user

If all of these fail, you could try manually adding the repository to your Python path: export PYTHONPATH=<path-to-unidistill-root>:$PYTHONPATH
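
For reference, a small Python check like the following (an illustrative sketch, not part of the repository; run it from the repository root) can show where unidistill is actually imported from and whether the missing file exists on disk:

# check_import.py - hypothetical diagnostic helper
import os
import sys

import unidistill

# Where is the package actually being imported from?
print("unidistill imported from:", os.path.dirname(unidistill.__file__))

# Entries on sys.path that might shadow the editable install
for p in sys.path:
    print("sys.path entry:", p)

# Does the module that fails to import actually exist on disk?
target = os.path.join(
    os.path.dirname(unidistill.__file__),
    "data", "multisensorfusion", "nuScenes_multimodal.py",
)
print("nuScenes_multimodal.py exists:", os.path.exists(target))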

If you have more questions, please feel free to communicate with us.

DHuiTnut commented 1 year ago

Thanks! It works now. The folder contains two files whose names differ only in case, nuScenes_multimodal.py and nuscenes_multimodal.py, and git clone had skipped the former.
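
For anyone hitting the same problem: on a case-insensitive filesystem git may check out only one of two files whose names differ only in case, so it is worth verifying that both files are present. A purely illustrative check, run from the repository root:

import os

folder = "unidistill/data/multisensorfusion"
names = os.listdir(folder)
# Both spellings should be listed; on a case-insensitive checkout one may be missing.
print([n for n in names if n.lower() == "nuscenes_multimodal.py"])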

By the way, which version of mmdet3d do you use? I checked the official mmdet3d documentation and could not find an API description for the build_neck function under mmdet3d.models; the same goes for the build_backbone function under mmdet.models.

https://github.com/megvii-research/CVPR2023-UniDistill/blob/d680cfc52c1db1e8454da4c6bc0c7064ef0683d6/unidistill/layers/blocks_3d/mmdet3d/lss_fpn.py#LL2C9-L2C9

kebijuelun commented 1 year ago

By the way, which version of mmdet3d do you use?

Sorry, the required mmdet3d version was not clearly stated. We have updated the installation instructions: the mmdet3d version is 0.18.0, and we suggest referring to the updated README for environment configuration.

Install python3.6 + CUDAv10.2 + CUDNNv8.0.4 + pytorch(v1.9.0)

pip3 install mmcv-full==1.4.2 -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.9.0/index.html
pip3 install mmdet==2.20.0
# download mmdet3d whl from https://drive.google.com/file/d/1y6AjikFQGc400dTim9IIFdvR8RNvnPId/view?usp=share_link
pip3 install mmdet3d-0.18.0-cp36-cp36m-linux_x86_64.whl
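
After installation, a quick sanity check along these lines (a sketch; the version comments show what we expect, adjust to your environment) should succeed, including the builder functions asked about above:

import mmcv
import mmdet
import mmdet3d

print(mmcv.__version__)    # expected: 1.4.2
print(mmdet.__version__)   # expected: 2.20.0
print(mmdet3d.__version__) # expected: 0.18.0

# The builder helpers referenced by unidistill/layers/blocks_3d/mmdet3d/lss_fpn.py
from mmdet.models import build_backbone
from mmdet3d.models import build_neck
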
DHuiTnut commented 1 year ago

Thanks for your patience! Sorry to bother you again.

I could not find TransFusionHead or TransFusionBBoxCoder in the source folder unidistill.layers.head.det3d. Could you provide them?

kebijuelun commented 1 year ago

I could not find TransFusionHead or TransFusionBBoxCoder in the source folder unidistill.layers.head.det3d. Could you provide them?

TransFusionHead is an internal implementation that we are currently unable to disclose, but we have released all experiment code related to CenterHead. The missing TransFusion code does not affect the CenterHead experiments, so we recommend conducting your experiments with CenterHead.

DHuiTnut commented 1 year ago

Get it.

Now I get the test results nuscenes_result.json and boxes.pkl, but the log below shows "DATALOADER:0 TEST RESULTS {}" and the outputs folder is empty. Is this correct? If so, how can I get visualized 3D detection results?

(dect3d) root@ceb06b3e585e:~/CVPR2023-UniDistill# python unidistill/exps/multisensor_fusion/nuscenes/BEVFusion/BEVFusion_nuscenes_centerhead_camera_exp.py -b 1 --gpus 1 -p --ckpt_path /root/CVPR2023-UniDistill/checkpoints/lidar2camera/checkpoint/l2c_submit.pth
/root/CVPR2023-UniDistill/unidistill/exps/multisensor_fusion/nuscenes/BEVFusion/BEVFusion_nuscenes_base_exp.py:37: UserWarning: TransFusionHead and related components are not included.
  warnings.warn("TransFusionHead and related components are not included.")
Global seed set to 0
/root/anaconda3/envs/dect3d/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:287: LightningDeprecationWarning: Passing Trainer(accelerator='ddp') has been deprecated in v1.5 and will be removed in v1.7. Use Trainer(strategy='ddp') instead.
  f"Passing Trainer(accelerator={self.distributed_backend!r}) has been deprecated"
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
Global seed set to 0
initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 1 processes
----------------------------------------------------------------------------------------------------
Restoring states from the checkpoint path at /root/CVPR2023-UniDistill/checkpoints/lidar2camera/checkpoint/l2c_submit.pth
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]
Loaded model weights from checkpoint at /root/CVPR2023-UniDistill/checkpoints/lidar2camera/checkpoint/l2c_submit.pth
Testing: 0it [00:00, ?it/s]
/root/anaconda3/envs/dect3d/lib/python3.6/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Testing: 100%|##########| 6007/6008 [07:40<00:00, 14.01it/s]
Generating submission results...
process:0, start:0, end:376
process:1, start:376, end:752
process:2, start:752, end:1128
process:3, start:1128, end:1504
process:4, start:1504, end:1880
process:5, start:1880, end:2256
process:6, start:2256, end:2632
process:7, start:2632, end:3008
process:8, start:3008, end:3384
process:9, start:3384, end:3760
process:10, start:3760, end:4136
process:11, start:4136, end:4512
process:12, start:4512, end:4888
process:13, start:4888, end:5264
process:14, start:5264, end:5640
Testing: 100%|##########| 6008/6008 [07:55<00:00, 14.01it/s]
process:15, start:5640, end:6008
100%|##########| 6008/6008 [00:55<00:00, 109.20it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{}
--------------------------------------------------------------------------------
Testing: 100%|##########| 6008/6008 [08:40<00:00, 11.54it/s]

kebijuelun commented 1 year ago

Now I get the test results nuscenes_result.json and boxes.pkl, but the log below shows "DATALOADER:0 TEST RESULTS {}" and the outputs folder is empty. Is this correct? If so, how can I get visualized 3D detection results?

It is great that you can run inference successfully. Sorry, we currently do not provide any visualization code, as the main purpose of open-sourcing this project is to provide knowledge distillation examples.

We suggest referring to the convenient and detailed official nuScenes visualization tutorial: https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/tutorials/nuscenes_tutorial.ipynb (see the Data Visualizations section).
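
As a rough illustration, something along these lines (a sketch based on the devkit tutorial; version and dataroot are placeholders for your local setup) renders camera and lidar data with the dataset's ground-truth annotations; overlaying the predictions from nuscenes_result.json / boxes.pkl would still require your own plotting code:

from nuscenes.nuscenes import NuScenes

# Point version/dataroot at your local nuScenes copy.
nusc = NuScenes(version="v1.0-trainval", dataroot="/data/nuscenes", verbose=True)

sample = nusc.sample[0]
# All sensors (cameras + lidar + radar) with ground-truth boxes for one sample.
nusc.render_sample(sample["token"])
# Only the top lidar sweep, in bird's-eye view.
nusc.render_sample_data(sample["data"]["LIDAR_TOP"])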