Hi, if you are using the pillar-based backbone, you should first train the LiDAR-only model TransFusion-L with the config file transfusion_nusc_pillar_L.py. You can then generate fusion_pillar02_R50.pth with the following script (change the filenames accordingly):
import torch

# Load the image backbone checkpoint and the trained TransFusion-L checkpoint.
img = torch.load('img_backbone.pth', map_location='cpu')
pts = torch.load('transfusionL.pth', map_location='cpu')

# Start from the LiDAR-only weights, then copy the image backbone/neck weights
# in under an 'img_' prefix so the fusion model can find them.
new_model = {"state_dict": pts["state_dict"]}
for k, v in img["state_dict"].items():
    if 'backbone' in k or 'neck' in k:
        new_model["state_dict"]['img_' + k] = v
torch.save(new_model, "fusion_model.pth")
For the image backbone, you can directly use the checkpoint provided by mmdet3d. Then you can train TransFusion for another 6 epochs.
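If it helps, here is a hedged sketch of how the merged checkpoint could be plugged into the LiDAR-camera config for that fine-tuning stage, assuming the standard mmdetection3d load_from mechanism (the filename is just the one produced by the script above):

# In the LiDAR-camera config (e.g. transfusion_nusc_pillar_LC.py), initialize
# from the merged checkpoint instead of training from scratch.
load_from = 'fusion_model.pth'  # TransFusion-L weights + image backbone/neck weights

Training then proceeds with the usual mmdetection3d entry point (e.g. tools/train.py or tools/dist_train.sh) on that config.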
Thanks for your reply. I have some further questions. While reading the source code, I found that the num_views parameter is only used in transfusion_nusc_pillar_LC.py, transfusion_nusc_voxel_LC.py, and transfusion_waymo_voxel_LC.py. What is the meaning of num_views? Looking forward to your answer. Thank you!
num_views means the number of images corresponding to one LiDAR frame, so only the LiDAR-camera model has this parameter. Different datasets have different numbers of cameras (6 for nuScenes, 5 for Waymo).
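In config terms it is just a dataset-dependent constant; a minimal sketch (only the value changes between datasets):

num_views = 6  # nuScenes: 6 surround-view cameras per LiDAR frame
# num_views = 5  # Waymo: 5 cameras per LiDAR frame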
Hi, when I reproduce the paper's algorithm, an error occurs while executing the configuration file transfusion_nusc_pillar_LC.py. Where can I find the fusion_pillar02_R50.pth file?
fusion_model.pth
Hello, I also encountered the same problem. How do I get the checkpoint file fusion_model.pth after I have completed the TransFusion-L training? Please guide me. Thank you!