Nightmare-n / GD-MAE

GD-MAE: Generative Decoder for MAE Pre-training on LiDAR Point Clouds (CVPR 2023)
Apache License 2.0

About graph r-cnn voi #12

Closed. Karkers closed this issue 1 year ago.

Karkers commented 1 year ago

I have a few questions (I haven't been working with deep learning for long, so there are quite a few; thanks in advance for the guidance).

  1. My setup is a single 3060 GPU. When training the voi model I changed lr to 0.00025 with batchsize=8 and got 3d AP: 93.3552, 85.9809, 83.1834, versus the pretrained model's 3d AP: 95.4233, 86.5286, 83.7810. Do you have any suggestions for these settings? (A rough learning-rate scaling sketch follows the config below.)
  2. When I run the end-to-end training command I get AP=0. How can I fix this?
  3. I modified the configuration in voi.yaml, mainly adding the CLASS entries and anchors for the remaining two classes, but it fails to run.

CLASS_NAMES: ['Car', 'Pedestrian', 'Cyclist']

DATA_CONFIG:
    _BASE_CONFIG_: cfgs/dataset_configs/kitti_dataset.yaml
    DATA_PROCESSOR:

MODEL:
    NAME: SECONDNet

VFE:
    NAME: DynVFE
    TYPE: mean

BACKBONE_3D:
    NAME: VoxelBackBone8x

MAP_TO_BEV:
    NAME: HeightCompression
    NUM_BEV_FEATURES: 256

BACKBONE_2D:
    NAME: BaseBEVBackbone

    LAYER_NUMS: [4, 4]
    LAYER_STRIDES: [1, 2]
    NUM_FILTERS: [64, 128]
    UPSAMPLE_STRIDES: [1, 2]
    NUM_UPSAMPLE_FILTERS: [128, 128]

DENSE_HEAD:
    NAME: AnchorHeadSingle
    CLASS_AGNOSTIC: False

    USE_DIRECTION_CLASSIFIER: True
    DIR_OFFSET: 0.78539
    DIR_LIMIT_OFFSET: 0.0
    NUM_DIR_BINS: 2

    ANCHOR_GENERATOR_CONFIG: [
        {
            'class_name': 'Car',
            'anchor_sizes': [[3.9, 1.6, 1.56]],
            'anchor_rotations': [0, 1.57],
            'anchor_bottom_heights': [-1.78],
            'align_center': False,
            'feature_map_stride': 8,
            'matched_threshold': 0.6,
            'unmatched_threshold': 0.45
        },
        {
            'class_name': 'Pedestrian',
            'anchor_sizes': [[0.6, 0.8, 1.73]],
            'anchor_rotations': [0, 1.57],
            'anchor_bottom_heights': [-0.6],
            'align_center': False,
            'feature_map_stride': 8,
            'matched_threshold': 0.5,
            'unmatched_threshold': 0.35
        },
        {
            'class_name': 'Cyclist',
            'anchor_sizes': [[1.76, 0.597, 1.736]],
            'anchor_rotations': [0, 1.57],
            'anchor_bottom_heights': [-0.6],
            'align_center': False,
            'feature_map_stride': 8,
            'matched_threshold': 0.5,
            'unmatched_threshold': 0.35
        }
    ]

    TARGET_ASSIGNER_CONFIG:
        NAME: AxisAlignedTargetAssigner
        POS_FRACTION: -1.0
        SAMPLE_SIZE: 512
        NORM_BY_NUM_EXAMPLES: False
        MATCH_HEIGHT: False
        BOX_CODER: ResidualCoder

    LOSS_CONFIG:
        LOSS_WEIGHTS: {
            'cls_weight': 1.0,
            'loc_weight': 2.0,
            'dir_weight': 0.2,
            'code_weights': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
        }

POST_PROCESSING:
    RECALL_THRESH_LIST: [0.3, 0.5, 0.7]
    SCORE_THRESH: 0.3
    OUTPUT_RAW_SCORE: False

    EVAL_METRIC: kitti

    NMS_CONFIG:
        MULTI_CLASSES_NMS: False
        NMS_TYPE: nms_gpu
        NMS_THRESH: 0.01
        NMS_PRE_MAXSIZE: 4096
        NMS_POST_MAXSIZE: 500

OPTIMIZATION:
    BATCH_SIZE_PER_GPU: 4
    NUM_EPOCHS: 80

OPTIMIZER: adam_onecycle
LR: 0.003
WEIGHT_DECAY: 0.01
MOMENTUM: 0.9

MOMS: [0.95, 0.85]
PCT_START: 0.4
DIV_FACTOR: 10
DECAY_STEP_LIST: [35, 45]
LR_DECAY: 0.1
LR_CLIP: 0.0000001

LR_WARMUP
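
On question 1: a common heuristic (the linear scaling rule, not something prescribed by this repo) is to scale the learning rate with the effective batch size relative to the reference run. A rough sketch, assuming the reference setup is 4 GPUs with the config defaults of BATCH_SIZE_PER_GPU=4 and LR=0.003:

    # Back-of-the-envelope linear scaling (an assumption, not the authors' recipe).
    ref_lr = 3e-3
    ref_effective_batch = 4 * 4        # 4 GPUs x BATCH_SIZE_PER_GPU=4

    my_effective_batch = 1 * 8         # single 3060, batchsize=8
    scaled_lr = ref_lr * my_effective_batch / ref_effective_batch
    print(scaled_lr)                   # 0.0015, noticeably higher than the 0.00025 used

Whether that actually helps here is an open question; the reply below simply recommends keeping the default 3e-3.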
Nightmare-n commented 1 year ago
  1. If the first result is graph-vo and the second is graph-voi, that is close to the results I provide. Running it a few more times should give you similar numbers (there is usually a fluctuation of about 0.1-0.3). I usually train on 4 GPUs with batch_size set to 4-8 depending on GPU memory. For the learning rate I always use the default 3e-3; you can try other values yourself, they might give better results, but I haven't tried.
  2. End-to-end training is currently only supported for graph-po. The other two-stage models freeze the first-stage parameters (see FREEZE_LAYERS in the config files), so if you train them end-to-end directly, the first-stage parameters stay randomly initialized and are never updated, which makes AP=0 fairly expected (a small illustrative sketch follows this list). The right approach is to train the first stage first and then the second stage; see scripts/dist_ts_train.sh. For graph-voi, the image branch is not trained either, so pretrained CenterNet parameters are loaded in advance as well (the provided ckpt contains the corresponding image-branch parameters).
  3. You could first try getting second_mini to run with 3 classes, and then adjust the two-stage model; it should only require adding the CLASS entries and anchors for the remaining two classes. Feel free to post the error message here and I'll take a look.
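
To illustrate point 2, here is a minimal sketch in plain PyTorch of why freezing a randomly initialized first stage leads to AP=0. The class and attribute names are made up for illustration; this is not the repo's actual FREEZE_LAYERS implementation:

    import torch.nn as nn

    class TwoStageDetector(nn.Module):
        """Hypothetical two-stage detector; names are illustrative only."""
        def __init__(self, stage_one: nn.Module, stage_two: nn.Module):
            super().__init__()
            self.stage_one = stage_one   # first-stage network producing proposals
            self.stage_two = stage_two   # second-stage refinement head

        def freeze_first_stage(self):
            # What FREEZE_LAYERS effectively does: gradients never reach these weights.
            for p in self.stage_one.parameters():
                p.requires_grad = False
            self.stage_one.eval()

    # If stage_one is frozen right after random initialization (i.e. without first
    # loading a trained first-stage checkpoint), its proposals stay random for the
    # whole run, the second stage has nothing useful to refine, and AP collapses.
    # Hence the two-step recipe: train stage one, then stage two (scripts/dist_ts_train.sh).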
Karkers commented 1 year ago

Thanks. Regarding the errors with the other classes, I'll try training second_mini with different classes first.

SiHengHeHSH commented 1 year ago

Hi, I want to ask whether the pretrained second_mini model was trained for 77 epochs, and for how many epochs graph_rcnn_vo was trained.

Karkers commented 1 year ago

I added the remaining two classes and their anchors, but second_mini fails to run and shows the error below. (I copied the anchors from the gdmae model config.)

Traceback (most recent call last):
  File "train.py", line 205, in <module>
    main()
  File "train.py", line 174, in main
    merge_all_iters_to_one_epoch=args.merge_all_iters_to_one_epoch
  File "/home/karker/PROJ/GD-MAE/tools/train_utils/train_utils.py", line 120, in train_model
    dataloader_iter=dataloader_iter
  File "/home/karker/PROJ/GD-MAE/tools/train_utils/train_utils.py", line 46, in train_one_epoch
    loss, tb_dict, disp_dict = model_func(model, batch, global_step=accumulated_iter)
  File "../pcdet/models/__init__.py", line 31, in model_func
    ret_dict, tb_dict, disp_dict = model(batch_dict)
  File "/home/karker/anaconda3/envs/gd-mae/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/karker/anaconda3/envs/gd-mae/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 886, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "/home/karker/anaconda3/envs/gd-mae/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "../pcdet/models/detectors/second_net.py", line 14, in forward
    loss, tb_dict, disp_dict = self.get_training_loss()
  File "../pcdet/models/detectors/second_net.py", line 27, in get_training_loss
    loss_rpn, tb_dict = self.dense_head.get_loss()
  File "../pcdet/models/dense_heads/anchor_head_template.py", line 217, in get_loss
    cls_loss, tb_dict = self.get_cls_layer_loss(tb_dict)
  File "../pcdet/models/dense_heads/anchor_head_template.py", line 123, in get_cls_layer_loss
    cls_loss_src = self.cls_loss_func(cls_preds, one_hot_targets, weights=cls_weights)  # [N, M]
  File "/home/karker/anaconda3/envs/gd-mae/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "../pcdet/utils/loss_utils.py", line 60, in forward
    pt = target * (1.0 - pred_sigmoid) + (1.0 - target) * pred_sigmoid
RuntimeError: The size of tensor a (13516800) must match the size of tensor b (211200) at non-singleton dimension 1
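
For what it's worth, the 211200 in the error is exactly the anchor count you would expect from the three-class config above, assuming the standard KITTI setup (point cloud range 0-70.4 m x ±40 m, 0.05 m voxels, feature-map stride 8, giving a 200 x 176 BEV map). A quick check:

    # Expected anchors per frame for the 3-class config (assuming the default KITTI grid;
    # the exact BEV size depends on POINT_CLOUD_RANGE and VOXEL_SIZE in the dataset config).
    bev_h, bev_w = 200, 176            # 80 m / (0.05 m * 8), 70.4 m / (0.05 m * 8)
    num_classes = 3                    # Car, Pedestrian, Cyclist
    sizes_per_class = 1
    rotations = 2                      # [0, 1.57]

    anchors = bev_h * bev_w * num_classes * sizes_per_class * rotations
    print(anchors)                     # 211200, matching tensor b in the error

So one side of the loss already reflects the three-class anchor layout, while the 13516800-element tensor on the other side does not, which hints that the classification predictions and the assigned targets are being built for different class/anchor configurations somewhere in the pipeline.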

SiHengHeHSH commented 1 year ago

Regarding "For graph-voi, the image branch is not trained, so pretrained CenterNet parameters are loaded in advance as well (the provided ckpt contains the corresponding image-branch parameters)": how do I load the pretrained CenterNet parameters in advance, and which checkpoint is "the provided ckpt"?
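
In case it helps while waiting for an answer, a generic way to inspect a checkpoint for image-branch weights in plain PyTorch (a sketch only; the file name and key filter here are placeholders, and GD-MAE's own checkpoint loading is normally what handles this):

    import torch

    # Placeholder path: point this at the checkpoint provided in the repo's model zoo.
    ckpt = torch.load("graph_rcnn_voi.pth", map_location="cpu")
    state_dict = ckpt.get("model_state", ckpt)   # OpenPCDet-style ckpts usually keep weights under 'model_state'

    # List keys to see whether an image branch is present and what it is called.
    image_keys = [k for k in state_dict if "image" in k.lower()]
    print(image_keys[:10])

    # Load only those weights into your model, ignoring everything else:
    # model.load_state_dict({k: v for k, v in state_dict.items() if k in image_keys}, strict=False)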

SiHengHeHSH commented 1 year ago

Bro, could you please send me a copy of the pretrained CenterNet model? It is really important to me and I have run out of options.

Karkers commented 1 year ago

> Bro, could you please send me a copy of the pretrained CenterNet model? It is really important to me and I have run out of options.

When I trained voi, I used the model provided by the author.

Karkers commented 10 months ago

> Bro, could you please send me a copy of the pretrained CenterNet model? It is really important to me and I have run out of options.

https://drive.google.com/uc?id=173eCABB3Hw261q50v5maTP4zOQJ0qfTd&export=download