myc634 / UltraLiDAR_nusc_waymo

MIT License
43 stars 3 forks

/usr/bin/python: No module named torch.distributed #1

Closed Zhangjyhhh closed 9 months ago

Zhangjyhhh commented 9 months ago

Hi! Thanks for your excellent work! When I run the command "./tools/dist_train.sh configs/ultralidar_kitti360.py 8", it shows:

(ultralidar) jyzhang@Makevoice:~/mmdetection3d/UltraLiDAR_nusc_waymo$ sudo bash ./tools/dist_train.sh configs/ultralidar_kitti360.py 8
/usr/bin/python: No module named torch.distributed

But this env actually has torch installed:

(ultralidar) jyzhang@Makevoice:~/mmdetection3d/UltraLiDAR_nusc_waymo$ conda list
# packages in environment at /home/jyzhang/anaconda3/envs/ultralidar:
#
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                        main  
_openmp_mutex             5.1                       1_gnu  
absl-py                   2.0.0                    pypi_0    pypi
addict                    2.4.0                    pypi_0    pypi
aliyun-python-sdk-core    2.14.0                   pypi_0    pypi
aliyun-python-sdk-kms     2.16.2                   pypi_0    pypi
ansi2html                 1.8.0                    pypi_0    pypi
asttokens                 2.4.1                    pypi_0    pypi
astunparse                1.6.3                    pypi_0    pypi
attrs                     23.1.0                   pypi_0    pypi
backcall                  0.2.0                    pypi_0    pypi
black                     23.10.1                  pypi_0    pypi
blas                      1.0                         mkl  
blinker                   1.7.0                    pypi_0    pypi
bzip2                     1.0.8                h7f98852_4    conda-forge
ca-certificates           2023.11.17           hbcca054_0    conda-forge
cachetools                4.2.4                    pypi_0    pypi
certifi                   2023.7.22                pypi_0    pypi
cffi                      1.16.0                   pypi_0    pypi
charset-normalizer        3.3.2                    pypi_0    pypi
click                     8.1.7                    pypi_0    pypi
colorama                  0.4.6                    pypi_0    pypi
comm                      0.1.4                    pypi_0    pypi
configargparse            1.7                      pypi_0    pypi
contourpy                 1.1.1                    pypi_0    pypi
crcmod                    1.7                      pypi_0    pypi
cryptography              41.0.5                   pypi_0    pypi
cudatoolkit               11.3.1              h9edb442_10    conda-forge
cycler                    0.12.1                   pypi_0    pypi
dash                      2.14.1                   pypi_0    pypi
dash-core-components      2.0.0                    pypi_0    pypi
dash-html-components      2.0.0                    pypi_0    pypi
dash-table                5.0.0                    pypi_0    pypi
decorator                 5.1.1                    pypi_0    pypi
descartes                 1.1.0                    pypi_0    pypi
einops                    0.6.1                    pypi_0    pypi
exceptiongroup            1.1.3                    pypi_0    pypi
executing                 2.0.1                    pypi_0    pypi
fastjsonschema            2.18.1                   pypi_0    pypi
ffmpeg                    4.3                  hf484d3e_0    pytorch
fire                      0.5.0                    pypi_0    pypi
flake8                    6.1.0                    pypi_0    pypi
flask                     3.0.0                    pypi_0    pypi
fonttools                 4.43.1                   pypi_0    pypi
freetype                  2.10.4               h0708190_1    conda-forge
gast                      0.3.3                    pypi_0    pypi
gmp                       6.2.1                h58526e2_0    conda-forge
gnutls                    3.6.13               h85f3911_1    conda-forge
google-auth               1.35.0                   pypi_0    pypi
google-auth-oauthlib      0.4.6                    pypi_0    pypi
google-pasta              0.2.0                    pypi_0    pypi
grpcio                    1.59.2                   pypi_0    pypi
h5py                      2.10.0                   pypi_0    pypi
idna                      3.4                      pypi_0    pypi
imageio                   2.31.6                   pypi_0    pypi
importlib-metadata        6.8.0                    pypi_0    pypi
importlib-resources       6.1.0                    pypi_0    pypi
iniconfig                 2.0.0                    pypi_0    pypi
intel-openmp              2021.4.0          h06a4308_3561  
ipython                   8.12.2                   pypi_0    pypi
ipywidgets                8.1.1                    pypi_0    pypi
itsdangerous              2.1.2                    pypi_0    pypi
jbig                      2.1               h7f98852_2003    conda-forge
jedi                      0.19.1                   pypi_0    pypi
jinja2                    3.1.2                    pypi_0    pypi
jmespath                  0.10.0                   pypi_0    pypi
joblib                    1.3.2                    pypi_0    pypi
jpeg                      9e                   h166bdaf_1    conda-forge
jsonschema                4.19.2                   pypi_0    pypi
jsonschema-specifications 2023.7.1                 pypi_0    pypi
jupyter-core              5.5.0                    pypi_0    pypi
jupyterlab-widgets        3.0.9                    pypi_0    pypi
keras-preprocessing       1.1.2                    pypi_0    pypi
kiwisolver                1.4.5                    pypi_0    pypi
kornia                    0.6.12                   pypi_0    pypi
lame                      3.100             h7f98852_1001    conda-forge
lcms2                     2.12                 hddcbb42_0    conda-forge
ld_impl_linux-64          2.38                 h1181459_1  
lerc                      2.2.1                h9c3ff4c_0    conda-forge
libdeflate                1.7                  h7f98852_5    conda-forge
libffi                    3.4.4                h6a678d5_0  
libgcc-ng                 11.2.0               h1234567_1  
libgomp                   11.2.0               h1234567_1  
libiconv                  1.17                 h166bdaf_0    conda-forge
libpng                    1.6.37               h21135ba_2    conda-forge
libstdcxx-ng              11.2.0               h1234567_1  
libtiff                   4.3.0                hf544144_1    conda-forge
libuv                     1.43.0               h7f98852_0    conda-forge
libwebp-base              1.2.2                h7f98852_1    conda-forge
llvmlite                  0.36.0                   pypi_0    pypi
lyft-dataset-sdk          0.0.8                    pypi_0    pypi
lz4-c                     1.9.3                h9c3ff4c_1    conda-forge
markdown                  3.5.1                    pypi_0    pypi
markdown-it-py            3.0.0                    pypi_0    pypi
markupsafe                2.1.3                    pypi_0    pypi
matplotlib                3.5.3                    pypi_0    pypi
matplotlib-inline         0.1.6                    pypi_0    pypi
mccabe                    0.7.0                    pypi_0    pypi
mdurl                     0.1.2                    pypi_0    pypi
mkl                       2021.4.0           h06a4308_640  
mkl-service               2.4.0            py38h95df7f1_0    conda-forge
mkl_fft                   1.3.1            py38h8666266_1    conda-forge
mkl_random                1.2.2            py38h1abd341_0    conda-forge
mmcls                     0.25.0                   pypi_0    pypi
mmcv-full                 1.4.8                    pypi_0    pypi
mmdet                     2.28.2                   pypi_0    pypi
mmdet3d                   1.0.0rc1                  dev_0    <develop>
mmsegmentation            0.30.0                   pypi_0    pypi
model-index               0.1.11                   pypi_0    pypi
mypy-extensions           1.0.0                    pypi_0    pypi
nbformat                  5.7.0                    pypi_0    pypi
ncurses                   6.4                  h6a678d5_0  
nest-asyncio              1.5.8                    pypi_0    pypi
nettle                    3.6                  he412f7d_0    conda-forge
networkx                  2.2                      pypi_0    pypi
numba                     0.53.0                   pypi_0    pypi
numpy                     1.23.5                   pypi_0    pypi
numpy-base                1.24.3           py38h31eccc5_0  
nuscenes-devkit           1.1.11                   pypi_0    pypi
oauthlib                  3.2.2                    pypi_0    pypi
olefile                   0.47               pyhd8ed1ab_0    conda-forge
open3d                    0.17.0                   pypi_0    pypi
opencv-python             4.8.1.78                 pypi_0    pypi
opendatalab               0.0.10                   pypi_0    pypi
openh264                  2.1.1                h780b84a_0    conda-forge
openjpeg                  2.4.0                hb52868f_1    conda-forge
openmim                   0.3.9                    pypi_0    pypi
openssl                   3.0.12               h7f8727e_0  
openxlab                  0.0.32                   pypi_0    pypi
opt-einsum                3.3.0                    pypi_0    pypi
ordered-set               4.1.0                    pypi_0    pypi
oss2                      2.17.0                   pypi_0    pypi
packaging                 23.2                     pypi_0    pypi
pandas                    2.0.3                    pypi_0    pypi
parso                     0.8.3                    pypi_0    pypi
pathspec                  0.11.2                   pypi_0    pypi
pexpect                   4.8.0                    pypi_0    pypi
pickleshare               0.7.5                    pypi_0    pypi
pillow                    10.0.1                   pypi_0    pypi
pip                       23.3.1           py38h06a4308_0  
pkgutil-resolve-name      1.3.10                   pypi_0    pypi
platformdirs              3.11.0                   pypi_0    pypi
plotly                    5.18.0                   pypi_0    pypi
pluggy                    1.3.0                    pypi_0    pypi
plyfile                   1.0.1                    pypi_0    pypi
prettytable               3.9.0                    pypi_0    pypi
prompt-toolkit            3.0.39                   pypi_0    pypi
protobuf                  3.19.0                   pypi_0    pypi
ptyprocess                0.7.0                    pypi_0    pypi
pure-eval                 0.2.2                    pypi_0    pypi
pyasn1                    0.5.0                    pypi_0    pypi
pyasn1-modules            0.3.0                    pypi_0    pypi
pycocotools               2.0.7                    pypi_0    pypi
pycodestyle               2.11.1                   pypi_0    pypi
pycparser                 2.21                     pypi_0    pypi
pycryptodome              3.19.0                   pypi_0    pypi
pyflakes                  3.1.0                    pypi_0    pypi
pygments                  2.16.1                   pypi_0    pypi
pyparsing                 3.1.1                    pypi_0    pypi
pyquaternion              0.9.9                    pypi_0    pypi
pytest                    7.4.3                    pypi_0    pypi
python                    3.8.18               h955ad1f_0  
python-dateutil           2.8.2                    pypi_0    pypi
python_abi                3.8                      2_cp38    conda-forge
pytorch                   1.10.0          py3.8_cuda11.3_cudnn8.2.0_0    pytorch
pytorch-mutex             1.0                        cuda    pytorch
pytz                      2023.3.post1             pypi_0    pypi
pywavelets                1.4.1                    pypi_0    pypi
pyyaml                    6.0.1                    pypi_0    pypi
readline                  8.2                  h5eee18b_0  
referencing               0.30.2                   pypi_0    pypi
requests                  2.28.2                   pypi_0    pypi
requests-oauthlib         1.3.1                    pypi_0    pypi
retrying                  1.3.4                    pypi_0    pypi
rich                      13.4.2                   pypi_0    pypi
rpds-py                   0.10.6                   pypi_0    pypi
rsa                       4.9                      pypi_0    pypi
scikit-image              0.19.3                   pypi_0    pypi
scikit-learn              1.2.0                    pypi_0    pypi
scipy                     1.4.1                    pypi_0    pypi
setuptools                60.2.0                   pypi_0    pypi
shapely                   1.8.5.post1              pypi_0    pypi
six                       1.16.0             pyh6c4a22f_0    conda-forge
sqlite                    3.41.2               h5eee18b_0  
stack-data                0.6.3                    pypi_0    pypi
tabulate                  0.9.0                    pypi_0    pypi
tenacity                  8.2.3                    pypi_0    pypi
tensorboard               2.2.2                    pypi_0    pypi
tensorboard-data-server   0.7.2                    pypi_0    pypi
tensorboard-plugin-wit    1.8.1                    pypi_0    pypi
tensorflow-estimator      2.2.0                    pypi_0    pypi
tensorflow-gpu            2.2.0                    pypi_0    pypi
termcolor                 2.3.0                    pypi_0    pypi
terminaltables            3.1.10                   pypi_0    pypi
threadpoolctl             3.2.0                    pypi_0    pypi
tifffile                  2023.7.10                pypi_0    pypi
timm                      0.5.4                    pypi_0    pypi
tk                        8.6.12               h1ccaba5_0  
tomli                     2.0.1                    pypi_0    pypi
torchaudio                0.10.0+cu113             pypi_0    pypi
torchvision               0.11.1+cu113             pypi_0    pypi
tqdm                      4.65.2                   pypi_0    pypi
traitlets                 5.13.0                   pypi_0    pypi
trimesh                   2.35.39                  pypi_0    pypi
typing-extensions         4.8.0                    pypi_0    pypi
typing_extensions         4.9.0              pyha770c72_0    conda-forge
tzdata                    2023.3                   pypi_0    pypi
urllib3                   1.26.18                  pypi_0    pypi
waymo-open-dataset-tf-2-2-0 1.2.0                    pypi_0    pypi
wcwidth                   0.2.9                    pypi_0    pypi
werkzeug                  3.0.1                    pypi_0    pypi
wheel                     0.41.2           py38h06a4308_0  
widgetsnbextension        4.0.9                    pypi_0    pypi
wrapt                     1.16.0                   pypi_0    pypi
xz                        5.4.5                h5eee18b_0  
yapf                      0.40.1                   pypi_0    pypi
zipp                      3.17.0                   pypi_0    pypi
zlib                      1.2.13               h5eee18b_0  
zstd                      1.5.0                ha95c52a_0    conda-forge

And I checked torch.distributed:

(ultralidar) jyzhang@Makevoice:~/mmdetection3d/UltraLiDAR_nusc_waymo$ python
Python 3.8.18 (default, Sep 11 2023, 13:40:15) 
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch.distributed as dist
>>> print (dist.__file__)
/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/__init__.py
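
For reference, the error above names /usr/bin/python while the module clearly lives under the ultralidar env. A quick way to see which interpreter is actually being used (a minimal check, assuming the env layout shown above; on many systems sudo resets PATH via secure_path, so python under sudo resolves to the system interpreter):

python -c "import sys, torch.distributed; print(sys.executable)"    # env interpreter, has torch.distributed
sudo python -c "import sys; print(sys.executable)"                   # likely /usr/bin/python, which does not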

So I don't know how to solve this issue. Could you tell me how?
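
For context, tools/dist_train.sh in mmdetection-style repos is usually a thin wrapper around torch.distributed.launch, roughly like the sketch below (an assumption, not necessarily this repo's exact script). It launches whatever python resolves to, so running it under sudo can pick up /usr/bin/python instead of the conda env's interpreter, which matches the error above.

#!/usr/bin/env bash
# Sketch of a typical mmdetection-style dist_train.sh (assumed, not this repo's exact file)
CONFIG=$1
GPUS=$2
PORT=${PORT:-29500}
python -m torch.distributed.launch \
    --nproc_per_node=$GPUS \
    --master_port=$PORT \
    $(dirname "$0")/train.py $CONFIG --launcher pytorch ${@:3}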

myc634 commented 9 months ago

Hi, thanks for your interest! Could you please provide the command you used to install PyTorch?

Zhangjyhhh commented 9 months ago

@myc634 conda install pytorch==1.10.0 torchvision==0.11.0 torchaudio==0.10.0 cudatoolkit=11.3 -c pytorch -c conda-forge

Zhangjyhhh commented 9 months ago

Hi, thanks for your interest! Could you please provide the command you used to install PyTorch?

conda install pytorch==1.10.0 torchvision==0.11.0 torchaudio==0.10.0 cudatoolkit=11.3 -c pytorch -c conda-forge

myc634 commented 9 months ago

I don't know the exact cause of this issue, but I recommend following the README and installing PyTorch with pip.
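
For what it's worth, the pip route for this PyTorch/CUDA combination is usually something along the lines of the command below (an assumption; the README's exact command should take precedence, and the versions here simply mirror the ones already listed in the env):

pip install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 \
    -f https://download.pytorch.org/whl/cu113/torch_stable.html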

Zhangjyhhh commented 9 months ago

I don't know the exact cause of this issue, but I recommend following the README and installing PyTorch with pip.

I solved this issue by running the command "python -m torch.distributed.launch --nproc_per_node=1 --master_port=29505 ./tools/train.py configs/ultralidar_nusc.py --launcher pytorch". I only have a single GPU. Is it right to set "--nproc_per_node=1"? Do I need to change any other file? When I ran the command above, another issue happened (log below). By the way, I use the nuScenes v1.0-mini dataset.
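
For reference, --nproc_per_node is the number of worker processes per node, so 1 matches a single-GPU setup. The deprecation warning in the log below suggests the torchrun form instead, which would be roughly (an untested sketch, assuming train.py reads LOCAL_RANK from the environment, as recent mmcv/mmdet tools do):

torchrun --nproc_per_node=1 --master_port=29505 \
    ./tools/train.py configs/ultralidar_nusc.py --launcher pytorch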

(ultralidar) jyzhang@Makevoice:~/mmdetection3d/UltraLiDAR_nusc_waymo$ python -m torch.distributed.launch --nproc_per_node=1 --master_port=29505 ./tools/train.py  configs/ultralidar_nusc.py  --launcher pytorch 
/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
plugin
2023-12-24 19:58:31,126 - mmdet - INFO - Environment info:
------------------------------------------------------------
sys.platform: linux
Python: 3.8.18 (default, Sep 11 2023, 13:40:15) [GCC 11.2.0]
CUDA available: True
GPU 0: NVIDIA GeForce RTX 2080
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.3, V11.3.109
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.10.0+cu113
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  - CuDNN 8.2
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, 

TorchVision: 0.11.1+cu113
OpenCV: 4.8.1
MMCV: 1.5.0
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.3
MMDetection: 2.28.2
MMSegmentation: 0.30.0
MMDetection3D: 1.0.0rc1+97e072b
------------------------------------------------------------

2023-12-24 19:58:31,931 - mmdet - INFO - Distributed training: True
2023-12-24 19:58:32,699 - mmdet - INFO - Config:
checkpoint_config = dict(interval=1)
log_config = dict(
    interval=50,
    hooks=[dict(type='TextLoggerHook'),
           dict(type='TensorboardLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/nusc_stage1'
load_from = None
resume_from = None
workflow = [('train', 1)]
model_type = 'codebook_training'
batch_size = 1
point_cloud_range = [-50.0, -50.0, -4.0, 50.0, 50.0, 3.0]
voxel_size = [0.15625, 0.15625, 0.2]
class_names = [
    'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier',
    'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone'
]
plugin = True
plugin_dir = 'plugin/'
num_points = 30
model = dict(
    type='UltraLiDAR',
    model_type='codebook_training',
    pts_bbox_head=dict(
        type='CenterHead',
        in_channels=256,
        tasks=[
            dict(num_class=1, class_names=['car']),
            dict(num_class=2, class_names=['truck', 'construction_vehicle']),
            dict(num_class=2, class_names=['bus', 'trailer']),
            dict(num_class=1, class_names=['barrier']),
            dict(num_class=2, class_names=['motorcycle', 'bicycle']),
            dict(num_class=2, class_names=['pedestrian', 'traffic_cone'])
        ],
        common_heads=dict(
            reg=(2, 2), height=(1, 2), dim=(3, 2), rot=(2, 2), vel=(2, 2)),
        share_conv_channel=64,
        bbox_coder=dict(
            type='CenterPointBBoxCoder',
            pc_range=[-50.0, -50.0],
            post_center_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0],
            max_num=500,
            score_threshold=0.1,
            out_size_factor=8,
            voxel_size=[0.15625, 0.15625],
            code_size=9),
        separate_head=dict(
            type='SeparateHead', init_bias=-2.19, final_kernel=3),
        loss_cls=dict(type='GaussianFocalLoss', reduction='mean'),
        loss_bbox=dict(type='L1Loss', reduction='mean', loss_weight=0.25),
        norm_bbox=True),
    voxelizer=dict(
        type='Voxelizer',
        x_min=-50.0,
        x_max=50.0,
        y_min=-50.0,
        y_max=50.0,
        z_min=-4.0,
        z_max=3.0,
        step=0.15625,
        z_step=0.2),
    vector_quantizer=dict(
        type='VectorQuantizer',
        n_e=1024,
        e_dim=1024,
        beta=0.25,
        cosine_similarity=False),
    lidar_encoder=dict(type='VQEncoder', img_size=640, codebook_dim=1024),
    lidar_decoder=dict(
        type='VQDecoder',
        img_size=(640, 640),
        num_patches=6400,
        codebook_dim=1024),
    train_cfg=dict(
        pts=dict(
            point_cloud_range=[-50.0, -50.0, -4.0, 50.0, 50.0, 3.0],
            grid_size=[1024, 1024, 40],
            voxel_size=[0.15625, 0.15625, 0.2],
            out_size_factor=8,
            dense_reg=1,
            gaussian_overlap=0.1,
            max_objs=500,
            min_radius=2,
            code_weights=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])),
    test_cfg=dict(
        pts=dict(
            pc_range=[-50.0, -50.0],
            post_center_limit_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0],
            max_per_img=500,
            max_pool_nms=False,
            min_radius=[4, 12, 10, 1, 0.85, 0.175],
            score_threshold=0.1,
            out_size_factor=8,
            voxel_size=[0.15625, 0.15625],
            pre_max_size=1000,
            post_max_size=83,
            nms_type=[
                'rotate', 'rotate', 'rotate', 'circle', 'rotate', 'rotate'
            ],
            nms_thr=[0.2, 0.2, 0.2, 0.2, 0.2, 0.5],
            nms_rescale_factor=[
                1.0, [0.7, 0.7], [0.4, 0.55], 1.1, [1.0, 1.0], [4.5, 9.0]
            ])))
dataset_type = 'NuscDataset'
data_root = '/home/jyzhang/datasets/nuScenes/'
file_client_args = dict(backend='disk')
bda_aug_conf = dict(
    rot_lim=(-22.5, 22.5),
    scale_lim=(0.95, 1.05),
    flip_dx_ratio=0.5,
    flip_dy_ratio=0.5)
train_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=5,
        use_dim=5,
        file_client_args=dict(backend='disk')),
    dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True),
    dict(
        type='PointsRangeFilter',
        point_cloud_range=[-50.0, -50.0, -4.0, 50.0, 50.0, 3.0]),
    dict(
        type='ObjectRangeFilter',
        point_cloud_range=[-50.0, -50.0, -4.0, 50.0, 50.0, 3.0]),
    dict(
        type='ObjectNameFilter',
        classes=[
            'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
            'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone'
        ]),
    dict(
        type='DefaultFormatBundle3D',
        class_names=[
            'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
            'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone'
        ]),
    dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
]
input_modality = dict(
    use_lidar=True,
    use_camera=False,
    use_radar=False,
    use_map=False,
    use_external=False)
data = dict(
    samples_per_gpu=1,
    workers_per_gpu=1,
    train=dict(
        type='NuscDataset',
        data_root='/home/jyzhang/datasets/nuScenes/',
        ann_file='/home/jyzhang/datasets/nuScenes/nuscenes_infos_train.pkl',
        pipeline=[
            dict(
                type='LoadPointsFromFile',
                coord_type='LIDAR',
                load_dim=5,
                use_dim=5,
                file_client_args=dict(backend='disk')),
            dict(
                type='LoadAnnotations3D',
                with_bbox_3d=True,
                with_label_3d=True),
            dict(
                type='PointsRangeFilter',
                point_cloud_range=[-50.0, -50.0, -4.0, 50.0, 50.0, 3.0]),
            dict(
                type='ObjectRangeFilter',
                point_cloud_range=[-50.0, -50.0, -4.0, 50.0, 50.0, 3.0]),
            dict(
                type='ObjectNameFilter',
                classes=[
                    'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
                    'barrier', 'motorcycle', 'bicycle', 'pedestrian',
                    'traffic_cone'
                ]),
            dict(
                type='DefaultFormatBundle3D',
                class_names=[
                    'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
                    'barrier', 'motorcycle', 'bicycle', 'pedestrian',
                    'traffic_cone'
                ]),
            dict(
                type='Collect3D',
                keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
        ],
        classes=[
            'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
            'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone'
        ],
        modality=dict(
            use_lidar=True,
            use_camera=False,
            use_radar=False,
            use_map=False,
            use_external=False),
        test_mode=False,
        box_type_3d='LiDAR'),
    val=dict(
        type='NuscDataset',
        data_root='/home/jyzhang/datasets/nuScenes/',
        ann_file='/home/jyzhang/datasets/nuScenes/nuscenes_infos_val.pkl',
        pipeline=[
            dict(
                type='LoadPointsFromFile',
                coord_type='LIDAR',
                load_dim=5,
                use_dim=5,
                file_client_args=dict(backend='disk')),
            dict(
                type='LoadAnnotations3D',
                with_bbox_3d=True,
                with_label_3d=True),
            dict(
                type='PointsRangeFilter',
                point_cloud_range=[-50.0, -50.0, -4.0, 50.0, 50.0, 3.0]),
            dict(
                type='ObjectRangeFilter',
                point_cloud_range=[-50.0, -50.0, -4.0, 50.0, 50.0, 3.0]),
            dict(
                type='ObjectNameFilter',
                classes=[
                    'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
                    'barrier', 'motorcycle', 'bicycle', 'pedestrian',
                    'traffic_cone'
                ]),
            dict(
                type='DefaultFormatBundle3D',
                class_names=[
                    'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
                    'barrier', 'motorcycle', 'bicycle', 'pedestrian',
                    'traffic_cone'
                ]),
            dict(
                type='Collect3D',
                keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
        ],
        classes=[
            'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
            'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone'
        ],
        modality=dict(
            use_lidar=True,
            use_camera=False,
            use_radar=False,
            use_map=False,
            use_external=False),
        test_mode=True,
        box_type_3d='LiDAR'),
    test=dict(
        type='NuscDataset',
        data_root='/home/jyzhang/datasets/nuScenes/',
        ann_file='/home/jyzhang/datasets/nuScenes/nuscenes_infos_val.pkl',
        pipeline=[
            dict(
                type='LoadPointsFromFile',
                coord_type='LIDAR',
                load_dim=5,
                use_dim=5,
                file_client_args=dict(backend='disk')),
            dict(
                type='LoadAnnotations3D',
                with_bbox_3d=True,
                with_label_3d=True),
            dict(
                type='PointsRangeFilter',
                point_cloud_range=[-50.0, -50.0, -4.0, 50.0, 50.0, 3.0]),
            dict(
                type='ObjectRangeFilter',
                point_cloud_range=[-50.0, -50.0, -4.0, 50.0, 50.0, 3.0]),
            dict(
                type='ObjectNameFilter',
                classes=[
                    'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
                    'barrier', 'motorcycle', 'bicycle', 'pedestrian',
                    'traffic_cone'
                ]),
            dict(
                type='DefaultFormatBundle3D',
                class_names=[
                    'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
                    'barrier', 'motorcycle', 'bicycle', 'pedestrian',
                    'traffic_cone'
                ]),
            dict(
                type='Collect3D',
                keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
        ],
        classes=[
            'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
            'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone'
        ],
        modality=dict(
            use_lidar=True,
            use_camera=False,
            use_radar=False,
            use_map=False,
            use_external=False),
        test_mode=True,
        box_type_3d='LiDAR'))
optimizer = dict(
    type='AdamW',
    lr=0.0008,
    betas=(0.9, 0.95),
    paramwise_cfg=dict(
        custom_keys=dict(
            absolute_pos_embed=dict(decay_mult=0.0),
            relative_position_bias_table=dict(decay_mult=0.0),
            norm=dict(decay_mult=0.0),
            embedding=dict(decay_mult=0.0),
            img_backbone=dict(lr_mult=0.1, decay_mult=0.001))),
    weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=5, norm_type=2))
lr_config = dict(
    policy='CosineAnnealing',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=0.3333333333333333,
    min_lr_ratio=0.001)
runner = dict(type='EpochBasedRunner', max_epochs=80)
checkpoint = None
find_unused_parameters = True
gpu_ids = range(0, 1)
device = 'cuda'

2023-12-24 19:58:32,699 - mmdet - INFO - Set random seed to 0, deterministic: False
/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
2023-12-24 19:58:33,653 - mmdet - INFO - Model:
UltraLiDAR(
  (voxelizer): Voxelizer()
  (vector_quantizer): VectorQuantizer(
    (embedding): Embedding(1024, 1024)
  )
  (pre_quant): Sequential(
    (0): Linear(in_features=1024, out_features=1024, bias=True)
    (1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
  )
  (lidar_encoder): VQEncoder(
    (patch_embed): PatchEmbed(
      (proj): Conv2d(40, 512, kernel_size=(8, 8), stride=(8, 8))
      (norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
    )
    (blocks): Sequential(
      (0): BasicLayer(
        dim=512, input_resolution=(80, 80), depth=12
        (blocks): ModuleList(
          (0): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (1): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (2): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (3): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (4): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (5): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (6): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (7): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (8): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (9): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (10): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (11): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
        )
      )
    )
    (norm): Sequential(
      (0): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
      (1): GELU()
    )
    (pre_quant): Linear(in_features=512, out_features=1024, bias=True)
  )
  (lidar_decoder): VQDecoder(
    (decoder_embed): Linear(in_features=1024, out_features=512, bias=True)
    (blocks): BasicLayer(
      dim=512, input_resolution=(80, 80), depth=12
      (blocks): ModuleList(
        (0): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (1): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (2): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (3): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (4): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (5): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (6): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (7): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (8): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (9): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (10): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (11): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
      )
    )
    (norm): Sequential(
      (0): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
      (1): GELU()
    )
    (pred): Linear(in_features=512, out_features=2560, bias=True)
  )
  (aug): Sequential(
    (0): RandomVerticalFlip(p=0.5, p_batch=1.0, same_on_batch=False)
    (1): RandomHorizontalFlip(p=0.5, p_batch=1.0, same_on_batch=False)
  )
)
collecting samples...
collected 323 samples in 0.02s
collecting samples...
collected 323 samples in 0.02s
2023-12-24 19:58:37,369 - mmdet - INFO - Start running, host: jyzhang@Makevoice, work_dir: /home/jyzhang/mmdetection3d/UltraLiDAR_nusc_waymo/work_dirs/nusc_stage1
2023-12-24 19:58:37,369 - mmdet - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) CosineAnnealingLrUpdaterHook       
(NORMAL      ) CheckpointHook                     
(VERY_LOW    ) TextLoggerHook                     
(VERY_LOW    ) TensorboardLoggerHook              
 -------------------- 
before_train_epoch:
(VERY_HIGH   ) CosineAnnealingLrUpdaterHook       
(NORMAL      ) DistSamplerSeedHook                
(LOW         ) IterTimerHook                      
(VERY_LOW    ) TextLoggerHook                     
(VERY_LOW    ) TensorboardLoggerHook              
 -------------------- 
before_train_iter:
(VERY_HIGH   ) CosineAnnealingLrUpdaterHook       
(LOW         ) IterTimerHook                      
 -------------------- 
after_train_iter:
(ABOVE_NORMAL) OptimizerHook                      
(NORMAL      ) CheckpointHook                     
(LOW         ) IterTimerHook                      
(VERY_LOW    ) TextLoggerHook                     
(VERY_LOW    ) TensorboardLoggerHook              
 -------------------- 
after_train_epoch:
(NORMAL      ) CheckpointHook                     
(VERY_LOW    ) TextLoggerHook                     
(VERY_LOW    ) TensorboardLoggerHook              
 -------------------- 
before_val_epoch:
(NORMAL      ) DistSamplerSeedHook                
(LOW         ) IterTimerHook                      
(VERY_LOW    ) TextLoggerHook                     
(VERY_LOW    ) TensorboardLoggerHook              
 -------------------- 
before_val_iter:
(LOW         ) IterTimerHook                      
 -------------------- 
after_val_iter:
(LOW         ) IterTimerHook                      
 -------------------- 
after_val_epoch:
(VERY_LOW    ) TextLoggerHook                     
(VERY_LOW    ) TensorboardLoggerHook              
 -------------------- 
after_run:
(VERY_LOW    ) TextLoggerHook                     
(VERY_LOW    ) TensorboardLoggerHook              
 -------------------- 
2023-12-24 19:58:37,369 - mmdet - INFO - workflow: [('train', 1)], max: 80 epochs
2023-12-24 19:58:37,369 - mmdet - INFO - Checkpoints will be saved to /home/jyzhang/mmdetection3d/UltraLiDAR_nusc_waymo/work_dirs/nusc_stage1 by HardDiskBackend.
Traceback (most recent call last):
  File "./tools/train.py", line 277, in <module>
    main()
  File "./tools/train.py", line 266, in main
    train_detector(
  File "/home/jyzhang/mmdetection3d/UltraLiDAR_nusc_waymo/tools/mmdet_train.py", line 170, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 29, in run_iter
    outputs = self.model.train_step(data_batch, self.optimizer,
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/mmcv/parallel/distributed.py", line 59, in train_step
    output = self.module.train_step(*inputs[0], **kwargs[0])
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/mmdet/models/detectors/base.py", line 248, in train_step
    losses = self(**data)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 110, in new_func
    return old_func(*args, **kwargs)
  File "/home/jyzhang/mmdetection3d/mmdet3d/models/detectors/base.py", line 60, in forward
    return self.forward_train(**kwargs)
  File "/home/jyzhang/mmdetection3d/UltraLiDAR_nusc_waymo/plugin/models/detectors/ultralidar.py", line 297, in forward_train
    losses = self.train_codebook(points)
  File "/home/jyzhang/mmdetection3d/UltraLiDAR_nusc_waymo/plugin/models/detectors/ultralidar.py", line 157, in train_codebook
    lidar_feats = self.lidar_encoder(voxels)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jyzhang/mmdetection3d/UltraLiDAR_nusc_waymo/plugin/models/necks/vq_layer.py", line 352, in forward
    x = self.patch_embed(x)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/timm/models/layers/patch_embed.py", line 35, in forward
    x = self.proj(x)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 446, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 442, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [512, 40, 8, 8], expected input[1, 35, 640, 640] to have 40 channels, but got 35 channels instead
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 13261) of binary: /home/jyzhang/anaconda3/envs/ultralidar/bin/python
Traceback (most recent call last):
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
    elastic_launch(
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
./tools/train.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-12-24_19:58:47
  host      : Makevoice
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 13261)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
Zhangjyhhh commented 9 months ago

The above issue was caused by changing the parameter "point_cloud_range = [-50.0, -50.0, -5.0, 50.0, 50.0, 3.0]", and I have changed it back. Now I have run into another issue (command and log below). By the way, I only have a single GPU. Is it right to set "--nproc_per_node=1"? Do I need to change any other file?
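(For reference, the earlier 40-vs-35 channel error follows from the z extent of point_cloud_range: the voxelizer appears to produce one BEV channel per z-bin, so the patch-embedding Conv2d's in_channels must equal (z_max - z_min) / z_step. A minimal sketch under that assumption, using the values that appear in the config and model dump below:)

# Minimal sketch, assuming the BEV tensor fed to PatchEmbed has one channel
# per z-bin (consistent with Voxelizer(z_min=-5.0, z_max=3.0, z_step=0.2)
# and Conv2d(40, 512, kernel_size=(8, 8)) in the logs).
z_min, z_max, z_step = -5.0, 3.0, 0.2   # values from the nuScenes config
num_bev_channels = round((z_max - z_min) / z_step)
print(num_bev_channels)  # 40; a 7 m z extent would give the 35 channels in the error

With the default z range [-5.0, 3.0] and z_step 0.2 this yields the 40 channels the PatchEmbed weight expects, which is why restoring point_cloud_range removes the mismatch.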

(ultralidar) jyzhang@Makevoice:~/mmdetection3d/UltraLiDAR_nusc_waymo$ python -m torch.distributed.launch --nproc_per_node=1 --master_port=29505 ./tools/train.py  configs/ultralidar_nusc.py  --launcher pytorch 
/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
plugin
2023-12-24 20:41:03,431 - mmdet - INFO - Environment info:
------------------------------------------------------------
sys.platform: linux
Python: 3.8.18 (default, Sep 11 2023, 13:40:15) [GCC 11.2.0]
CUDA available: True
GPU 0: NVIDIA GeForce RTX 2080
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.3, V11.3.109
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.10.0+cu113
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  - CuDNN 8.2
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, 

TorchVision: 0.11.1+cu113
OpenCV: 4.8.1
MMCV: 1.5.0
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.3
MMDetection: 2.28.2
MMSegmentation: 0.30.0
MMDetection3D: 1.0.0rc1+97e072b
------------------------------------------------------------

2023-12-24 20:41:04,233 - mmdet - INFO - Distributed training: True
2023-12-24 20:41:04,996 - mmdet - INFO - Config:
checkpoint_config = dict(interval=1)
log_config = dict(
    interval=50,
    hooks=[dict(type='TextLoggerHook'),
           dict(type='TensorboardLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/nusc_stage1'
load_from = None
resume_from = None
workflow = [('train', 1)]
model_type = 'codebook_training'
batch_size = 1
point_cloud_range = [-50.0, -50.0, -5.0, 50.0, 50.0, 3.0]
voxel_size = [0.15625, 0.15625, 0.2]
class_names = [
    'car', 'truck', 'construction_vehicle', 'bus', 'trailer', 'barrier',
    'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone'
]
plugin = True
plugin_dir = 'plugin/'
num_points = 30
model = dict(
    type='UltraLiDAR',
    model_type='codebook_training',
    pts_bbox_head=dict(
        type='CenterHead',
        in_channels=256,
        tasks=[
            dict(num_class=1, class_names=['car']),
            dict(num_class=2, class_names=['truck', 'construction_vehicle']),
            dict(num_class=2, class_names=['bus', 'trailer']),
            dict(num_class=1, class_names=['barrier']),
            dict(num_class=2, class_names=['motorcycle', 'bicycle']),
            dict(num_class=2, class_names=['pedestrian', 'traffic_cone'])
        ],
        common_heads=dict(
            reg=(2, 2), height=(1, 2), dim=(3, 2), rot=(2, 2), vel=(2, 2)),
        share_conv_channel=64,
        bbox_coder=dict(
            type='CenterPointBBoxCoder',
            pc_range=[-50.0, -50.0],
            post_center_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0],
            max_num=500,
            score_threshold=0.1,
            out_size_factor=8,
            voxel_size=[0.15625, 0.15625],
            code_size=9),
        separate_head=dict(
            type='SeparateHead', init_bias=-2.19, final_kernel=3),
        loss_cls=dict(type='GaussianFocalLoss', reduction='mean'),
        loss_bbox=dict(type='L1Loss', reduction='mean', loss_weight=0.25),
        norm_bbox=True),
    voxelizer=dict(
        type='Voxelizer',
        x_min=-50.0,
        x_max=50.0,
        y_min=-50.0,
        y_max=50.0,
        z_min=-5.0,
        z_max=3.0,
        step=0.15625,
        z_step=0.2),
    vector_quantizer=dict(
        type='VectorQuantizer',
        n_e=1024,
        e_dim=1024,
        beta=0.25,
        cosine_similarity=False),
    lidar_encoder=dict(type='VQEncoder', img_size=640, codebook_dim=1024),
    lidar_decoder=dict(
        type='VQDecoder',
        img_size=(640, 640),
        num_patches=6400,
        codebook_dim=1024),
    train_cfg=dict(
        pts=dict(
            point_cloud_range=[-50.0, -50.0, -5.0, 50.0, 50.0, 3.0],
            grid_size=[1024, 1024, 40],
            voxel_size=[0.15625, 0.15625, 0.2],
            out_size_factor=8,
            dense_reg=1,
            gaussian_overlap=0.1,
            max_objs=500,
            min_radius=2,
            code_weights=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])),
    test_cfg=dict(
        pts=dict(
            pc_range=[-50.0, -50.0],
            post_center_limit_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0],
            max_per_img=500,
            max_pool_nms=False,
            min_radius=[4, 12, 10, 1, 0.85, 0.175],
            score_threshold=0.1,
            out_size_factor=8,
            voxel_size=[0.15625, 0.15625],
            pre_max_size=1000,
            post_max_size=83,
            nms_type=[
                'rotate', 'rotate', 'rotate', 'circle', 'rotate', 'rotate'
            ],
            nms_thr=[0.2, 0.2, 0.2, 0.2, 0.2, 0.5],
            nms_rescale_factor=[
                1.0, [0.7, 0.7], [0.4, 0.55], 1.1, [1.0, 1.0], [4.5, 9.0]
            ])))
dataset_type = 'NuscDataset'
data_root = '/home/jyzhang/datasets/nuScenes/'
file_client_args = dict(backend='disk')
bda_aug_conf = dict(
    rot_lim=(-22.5, 22.5),
    scale_lim=(0.95, 1.05),
    flip_dx_ratio=0.5,
    flip_dy_ratio=0.5)
train_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=5,
        use_dim=5,
        file_client_args=dict(backend='disk')),
    dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True),
    dict(
        type='PointsRangeFilter',
        point_cloud_range=[-50.0, -50.0, -5.0, 50.0, 50.0, 3.0]),
    dict(
        type='ObjectRangeFilter',
        point_cloud_range=[-50.0, -50.0, -5.0, 50.0, 50.0, 3.0]),
    dict(
        type='ObjectNameFilter',
        classes=[
            'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
            'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone'
        ]),
    dict(
        type='DefaultFormatBundle3D',
        class_names=[
            'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
            'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone'
        ]),
    dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
]
input_modality = dict(
    use_lidar=True,
    use_camera=False,
    use_radar=False,
    use_map=False,
    use_external=False)
data = dict(
    samples_per_gpu=1,
    workers_per_gpu=8,
    train=dict(
        type='NuscDataset',
        data_root='/home/jyzhang/datasets/nuScenes/',
        ann_file='/home/jyzhang/datasets/nuScenes/nuscenes_infos_train.pkl',
        pipeline=[
            dict(
                type='LoadPointsFromFile',
                coord_type='LIDAR',
                load_dim=5,
                use_dim=5,
                file_client_args=dict(backend='disk')),
            dict(
                type='LoadAnnotations3D',
                with_bbox_3d=True,
                with_label_3d=True),
            dict(
                type='PointsRangeFilter',
                point_cloud_range=[-50.0, -50.0, -5.0, 50.0, 50.0, 3.0]),
            dict(
                type='ObjectRangeFilter',
                point_cloud_range=[-50.0, -50.0, -5.0, 50.0, 50.0, 3.0]),
            dict(
                type='ObjectNameFilter',
                classes=[
                    'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
                    'barrier', 'motorcycle', 'bicycle', 'pedestrian',
                    'traffic_cone'
                ]),
            dict(
                type='DefaultFormatBundle3D',
                class_names=[
                    'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
                    'barrier', 'motorcycle', 'bicycle', 'pedestrian',
                    'traffic_cone'
                ]),
            dict(
                type='Collect3D',
                keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
        ],
        classes=[
            'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
            'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone'
        ],
        modality=dict(
            use_lidar=True,
            use_camera=False,
            use_radar=False,
            use_map=False,
            use_external=False),
        test_mode=False,
        box_type_3d='LiDAR'),
    val=dict(
        type='NuscDataset',
        data_root='/home/jyzhang/datasets/nuScenes/',
        ann_file='/home/jyzhang/datasets/nuScenes/nuscenes_infos_val.pkl',
        pipeline=[
            dict(
                type='LoadPointsFromFile',
                coord_type='LIDAR',
                load_dim=5,
                use_dim=5,
                file_client_args=dict(backend='disk')),
            dict(
                type='LoadAnnotations3D',
                with_bbox_3d=True,
                with_label_3d=True),
            dict(
                type='PointsRangeFilter',
                point_cloud_range=[-50.0, -50.0, -5.0, 50.0, 50.0, 3.0]),
            dict(
                type='ObjectRangeFilter',
                point_cloud_range=[-50.0, -50.0, -5.0, 50.0, 50.0, 3.0]),
            dict(
                type='ObjectNameFilter',
                classes=[
                    'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
                    'barrier', 'motorcycle', 'bicycle', 'pedestrian',
                    'traffic_cone'
                ]),
            dict(
                type='DefaultFormatBundle3D',
                class_names=[
                    'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
                    'barrier', 'motorcycle', 'bicycle', 'pedestrian',
                    'traffic_cone'
                ]),
            dict(
                type='Collect3D',
                keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
        ],
        classes=[
            'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
            'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone'
        ],
        modality=dict(
            use_lidar=True,
            use_camera=False,
            use_radar=False,
            use_map=False,
            use_external=False),
        test_mode=True,
        box_type_3d='LiDAR'),
    test=dict(
        type='NuscDataset',
        data_root='/home/jyzhang/datasets/nuScenes/',
        ann_file='/home/jyzhang/datasets/nuScenes/nuscenes_infos_val.pkl',
        pipeline=[
            dict(
                type='LoadPointsFromFile',
                coord_type='LIDAR',
                load_dim=5,
                use_dim=5,
                file_client_args=dict(backend='disk')),
            dict(
                type='LoadAnnotations3D',
                with_bbox_3d=True,
                with_label_3d=True),
            dict(
                type='PointsRangeFilter',
                point_cloud_range=[-50.0, -50.0, -5.0, 50.0, 50.0, 3.0]),
            dict(
                type='ObjectRangeFilter',
                point_cloud_range=[-50.0, -50.0, -5.0, 50.0, 50.0, 3.0]),
            dict(
                type='ObjectNameFilter',
                classes=[
                    'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
                    'barrier', 'motorcycle', 'bicycle', 'pedestrian',
                    'traffic_cone'
                ]),
            dict(
                type='DefaultFormatBundle3D',
                class_names=[
                    'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
                    'barrier', 'motorcycle', 'bicycle', 'pedestrian',
                    'traffic_cone'
                ]),
            dict(
                type='Collect3D',
                keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
        ],
        classes=[
            'car', 'truck', 'construction_vehicle', 'bus', 'trailer',
            'barrier', 'motorcycle', 'bicycle', 'pedestrian', 'traffic_cone'
        ],
        modality=dict(
            use_lidar=True,
            use_camera=False,
            use_radar=False,
            use_map=False,
            use_external=False),
        test_mode=True,
        box_type_3d='LiDAR'))
optimizer = dict(
    type='AdamW',
    lr=0.0008,
    betas=(0.9, 0.95),
    paramwise_cfg=dict(
        custom_keys=dict(
            absolute_pos_embed=dict(decay_mult=0.0),
            relative_position_bias_table=dict(decay_mult=0.0),
            norm=dict(decay_mult=0.0),
            embedding=dict(decay_mult=0.0),
            img_backbone=dict(lr_mult=0.1, decay_mult=0.001))),
    weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=5, norm_type=2))
lr_config = dict(
    policy='CosineAnnealing',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=0.3333333333333333,
    min_lr_ratio=0.001)
runner = dict(type='EpochBasedRunner', max_epochs=80)
checkpoint = None
find_unused_parameters = True
gpu_ids = range(0, 1)
device = 'cuda'

2023-12-24 20:41:04,996 - mmdet - INFO - Set random seed to 0, deterministic: False
/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
2023-12-24 20:41:05,935 - mmdet - INFO - Model:
UltraLiDAR(
  (voxelizer): Voxelizer()
  (vector_quantizer): VectorQuantizer(
    (embedding): Embedding(1024, 1024)
  )
  (pre_quant): Sequential(
    (0): Linear(in_features=1024, out_features=1024, bias=True)
    (1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
  )
  (lidar_encoder): VQEncoder(
    (patch_embed): PatchEmbed(
      (proj): Conv2d(40, 512, kernel_size=(8, 8), stride=(8, 8))
      (norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
    )
    (blocks): Sequential(
      (0): BasicLayer(
        dim=512, input_resolution=(80, 80), depth=12
        (blocks): ModuleList(
          (0): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (1): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (2): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (3): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (4): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (5): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (6): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (7): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (8): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (9): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (10): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
          (11): SwinTransformerBlock(
            (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn): WindowAttention(
              (qkv): Linear(in_features=512, out_features=1536, bias=True)
              (attn_drop): Dropout(p=0.0, inplace=False)
              (proj): Linear(in_features=512, out_features=512, bias=True)
              (proj_drop): Dropout(p=0.0, inplace=False)
              (softmax): Softmax(dim=-1)
            )
            (drop_path): Identity()
            (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp): Mlp(
              (fc1): Linear(in_features=512, out_features=2048, bias=True)
              (act): GELU()
              (drop1): Dropout(p=0.0, inplace=False)
              (fc2): Linear(in_features=2048, out_features=512, bias=True)
              (drop2): Dropout(p=0.0, inplace=False)
            )
          )
        )
      )
    )
    (norm): Sequential(
      (0): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
      (1): GELU()
    )
    (pre_quant): Linear(in_features=512, out_features=1024, bias=True)
  )
  (lidar_decoder): VQDecoder(
    (decoder_embed): Linear(in_features=1024, out_features=512, bias=True)
    (blocks): BasicLayer(
      dim=512, input_resolution=(80, 80), depth=12
      (blocks): ModuleList(
        (0): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (1): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (2): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (3): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (4): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (5): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (6): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (7): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (8): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (9): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (10): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (11): SwinTransformerBlock(
          (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            (qkv): Linear(in_features=512, out_features=1536, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=512, out_features=512, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
      )
    )
    (norm): Sequential(
      (0): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
      (1): GELU()
    )
    (pred): Linear(in_features=512, out_features=2560, bias=True)
  )
  (aug): Sequential(
    (0): RandomVerticalFlip(p=0.5, p_batch=1.0, same_on_batch=False)
    (1): RandomHorizontalFlip(p=0.5, p_batch=1.0, same_on_batch=False)
  )
)
collecting samples...
collected 323 samples in 0.02s
collecting samples...
collected 323 samples in 0.02s
2023-12-24 20:41:09,639 - mmdet - INFO - Start running, host: jyzhang@Makevoice, work_dir: /home/jyzhang/mmdetection3d/UltraLiDAR_nusc_waymo/work_dirs/nusc_stage1
2023-12-24 20:41:09,640 - mmdet - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) CosineAnnealingLrUpdaterHook       
(NORMAL      ) CheckpointHook                     
(VERY_LOW    ) TextLoggerHook                     
(VERY_LOW    ) TensorboardLoggerHook              
 -------------------- 
before_train_epoch:
(VERY_HIGH   ) CosineAnnealingLrUpdaterHook       
(NORMAL      ) DistSamplerSeedHook                
(LOW         ) IterTimerHook                      
(VERY_LOW    ) TextLoggerHook                     
(VERY_LOW    ) TensorboardLoggerHook              
 -------------------- 
before_train_iter:
(VERY_HIGH   ) CosineAnnealingLrUpdaterHook       
(LOW         ) IterTimerHook                      
 -------------------- 
after_train_iter:
(ABOVE_NORMAL) OptimizerHook                      
(NORMAL      ) CheckpointHook                     
(LOW         ) IterTimerHook                      
(VERY_LOW    ) TextLoggerHook                     
(VERY_LOW    ) TensorboardLoggerHook              
 -------------------- 
after_train_epoch:
(NORMAL      ) CheckpointHook                     
(VERY_LOW    ) TextLoggerHook                     
(VERY_LOW    ) TensorboardLoggerHook              
 -------------------- 
before_val_epoch:
(NORMAL      ) DistSamplerSeedHook                
(LOW         ) IterTimerHook                      
(VERY_LOW    ) TextLoggerHook                     
(VERY_LOW    ) TensorboardLoggerHook              
 -------------------- 
before_val_iter:
(LOW         ) IterTimerHook                      
 -------------------- 
after_val_iter:
(LOW         ) IterTimerHook                      
 -------------------- 
after_val_epoch:
(VERY_LOW    ) TextLoggerHook                     
(VERY_LOW    ) TensorboardLoggerHook              
 -------------------- 
after_run:
(VERY_LOW    ) TextLoggerHook                     
(VERY_LOW    ) TensorboardLoggerHook              
 -------------------- 
2023-12-24 20:41:09,640 - mmdet - INFO - workflow: [('train', 1)], max: 80 epochs
2023-12-24 20:41:09,640 - mmdet - INFO - Checkpoints will be saved to /home/jyzhang/mmdetection3d/UltraLiDAR_nusc_waymo/work_dirs/nusc_stage1 by HardDiskBackend.
Traceback (most recent call last):
  File "./tools/train.py", line 277, in <module>
    main()
  File "./tools/train.py", line 266, in main
    train_detector(
  File "/home/jyzhang/mmdetection3d/UltraLiDAR_nusc_waymo/tools/mmdet_train.py", line 170, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 29, in run_iter
    outputs = self.model.train_step(data_batch, self.optimizer,
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/mmcv/parallel/distributed.py", line 59, in train_step
    output = self.module.train_step(*inputs[0], **kwargs[0])
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/mmdet/models/detectors/base.py", line 248, in train_step
    losses = self(**data)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 110, in new_func
    return old_func(*args, **kwargs)
  File "/home/jyzhang/mmdetection3d/mmdet3d/models/detectors/base.py", line 60, in forward
    return self.forward_train(**kwargs)
  File "/home/jyzhang/mmdetection3d/UltraLiDAR_nusc_waymo/plugin/models/detectors/ultralidar.py", line 297, in forward_train
    losses = self.train_codebook(points)
  File "/home/jyzhang/mmdetection3d/UltraLiDAR_nusc_waymo/plugin/models/detectors/ultralidar.py", line 161, in train_codebook
    lidar_rec = self.lidar_decoder(lidar_quant)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jyzhang/mmdetection3d/UltraLiDAR_nusc_waymo/plugin/models/necks/vq_layer.py", line 448, in forward
    x = self.blocks(x)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/timm/models/swin_transformer.py", line 413, in forward
    x = blk(x)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/timm/models/swin_transformer.py", line 310, in forward
    x = x + self.drop_path(self.mlp(self.norm2(x)))
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/timm/models/layers/mlp.py", line 26, in forward
    x = self.fc1(x)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 103, in forward
    return F.linear(input, self.weight, self.bias)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/nn/functional.py", line 1848, in linear
    return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 7.79 GiB total capacity; 5.61 GiB already allocated; 56.81 MiB free; 5.79 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 17327) of binary: /home/jyzhang/anaconda3/envs/ultralidar/bin/python
Traceback (most recent call last):
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
    elastic_launch(
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
./tools/train.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-12-24_20:41:44
  host      : Makevoice
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 17327)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Does that mean I should use a GPU with more memory? Is there any way to solve it without a larger-memory GPU?

myc634 commented 9 months ago

Yes, --nproc_per_node=1 is right. For an RTX 2080 GPU, gradient checkpointing might work for you.
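(For context: gradient checkpointing trades extra compute for memory by recomputing a block's activations during the backward pass instead of storing them. Below is a minimal, generic PyTorch sketch; the wrapper class is made up for illustration and is not the repo's actual code.)

import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class CheckpointedBlocks(nn.Module):
    """Run a ModuleList of transformer blocks, recomputing activations on backward."""

    def __init__(self, blocks: nn.ModuleList):
        super().__init__()
        self.blocks = blocks

    def forward(self, x):
        for blk in self.blocks:
            # Only blk's inputs are kept; its intermediate activations are
            # recomputed during the backward pass, cutting peak memory use.
            x = checkpoint(blk, x)
        return x

Depending on the installed timm version, the Swin layers may also expose a use_checkpoint flag that achieves the same effect without a wrapper.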

Zhangjyhhh commented 9 months ago

Thanks for your reply.

Zhangjyhhh commented 9 months ago

@myc634 In eval step 0, I ran into another issue:

(ultralidar) jyzhang@sumig-System-Product-Name:~/mmdetection3d/UltraLiDAR_nusc_waymo$ python -m torch.distributed.launch --nproc_per_node=1 --master_port=29501 ./tools/test.py ./configs/ultralidar_nusc_static_blank_code.py  --eval "mIoU"
/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
Traceback (most recent call last):
  File "./tools/test.py", line 15, in <module>
    from plugin.datasets.builder import build_dataloader
ModuleNotFoundError: No module named 'plugin'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 3771) of binary: /home/jyzhang/anaconda3/envs/ultralidar/bin/python
Traceback (most recent call last):
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
    elastic_launch(
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/jyzhang/anaconda3/envs/ultralidar/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
./tools/test.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-12-25_16:10:43
  host      : sumig-System-Product-Name
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 3771)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

How can I solve it?
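(A hedged workaround sketch, not a confirmed fix: this error usually means the repository root that contains the plugin/ package is not on sys.path when ./tools/test.py starts. One way to check is to add the root manually before the import, for example near the top of tools/test.py; the path handling below is an assumption, not the repo's actual code.)

import os
import sys

# Resolve .../UltraLiDAR_nusc_waymo from .../UltraLiDAR_nusc_waymo/tools/test.py
repo_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, repo_root)

from plugin.datasets.builder import build_dataloader  # noqa: E402

Setting the PYTHONPATH environment variable to include the repository root would have the same effect.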