mit-han-lab / bevfusion

[ICRA'23] BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation
https://bevfusion.mit.edu
Apache License 2.0

what is the lidar-only-det.pth #611

Closed wyf0414 closed 4 months ago

wyf0414 commented 5 months ago

This is the training command I used: torchpack dist-run -np 1 python tools/train.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth --load_from pretrained/lidar-only-det.pth

I want to know what lidar-only-det.pth is. It is used here as a LiDAR-pretrained model.

I tried to validate lidar-only-det.pth directly, but got 0 mAP. Why?

Thanks for your help.

zhijian-liu commented 4 months ago

The configuration file you used is for the fusion model. Please use the configuration for LiDAR-only detection instead: https://github.com/mit-han-lab/bevfusion/blob/main/configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml.
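For reference, a sketch of how the LiDAR-only checkpoint could be evaluated with its matching configuration, following the evaluation pattern used elsewhere in this repository (the exact flags and GPU count may differ depending on your setup and repository version):

```shell
# Evaluate lidar-only-det.pth with the LiDAR-only config,
# not the camera+lidar fusion config used for training above.
# -np sets the number of processes/GPUs; adjust to your hardware.
torchpack dist-run -np 1 python tools/test.py \
    configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml \
    pretrained/lidar-only-det.pth \
    --eval bbox
```

Loading a LiDAR-only checkpoint under the fusion config leaves the camera branch and fuser uninitialized (or mismatched with the checkpoint's weights), which is why evaluation reports 0 mAP.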

wyf0414 commented 4 months ago

Thank you very much!