-
Thanks a lot for publishing the code for your great work!
I'm currently trying to get BEVFusion to run with a custom dataset.
I know that the nuScenes LiDAR points are in the format o…
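For reference, nuScenes LiDAR sweeps (`.pcd.bin` files) are stored as raw float32 binaries with five values per point: x, y, z, intensity, and ring index. A minimal loader, with a placeholder path, might look like this:

```python
import numpy as np

def load_nuscenes_points(path):
    """Load a nuScenes .pcd.bin LiDAR sweep.

    Each point is 5 float32 values: x, y, z, intensity, ring index.
    """
    return np.fromfile(path, dtype=np.float32).reshape(-1, 5)
```

A custom dataset then only needs its points converted into this (N, 5) layout, or the reshape adjusted to match its own per-point field count.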
-
Hi,
I want to know where the pretrained Swin-T checkpoint comes from. Is it from ImageNet-1K pretraining or somewhere else?
-
Hey there!
We're working on a project at the Technical University of Munich that involves segmentation of driving-scene images, and we are trying to deal with the problem of bad weather conditions.
Is…
-
After 6 epochs of training with the default configs (camera + LiDAR, detection, batch size changed to 1), the test results are:
```
mAP: 0.4010
mATE: 0.4349
mASE: 0.4707
mAOE: 0.5300
mAVE: 0.5276
mAAE: 0.2…
-
Hi, thanks a lot for sharing this great work. I noticed that a good pre-trained model for the camera backbone helps achieve better results.
How did you get the pre-trained model "swint-nuimages-pretrained.…
-
Hello, according to your previous reply, you used 8 GPUs to train the model. When I trained with two 3090 GPUs, the gradient exploded. I suspect the learning rate was too large. When trai…
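One common heuristic when changing the number of GPUs is the linear scaling rule: scale the learning rate in proportion to the total batch size. The numbers below are purely illustrative, not BEVFusion's actual config values:

```python
# Linear scaling rule sketch: LR is proportional to total batch size.
# All values here are illustrative placeholders, not the repo's config.
base_lr = 2.0e-4          # LR tuned for the reference setup
base_total_batch = 8 * 4  # e.g. 8 GPUs x 4 samples per GPU (assumed)
new_total_batch = 2 * 4   # e.g. 2 GPUs x 4 samples per GPU

scaled_lr = base_lr * new_total_batch / base_total_batch
print(scaled_lr)  # 5e-05
```

With a quarter of the total batch size, the learning rate is cut to a quarter as well, which often helps avoid gradient explosions when reproducing multi-GPU recipes on fewer cards.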
-
I have tried both LiDAR-only segmentation and fusion segmentation, on the mini dataset and on 1/10 of the full training dataset, as a quick exploration, and I found something strange.
For the result map/mean/iou@max…
-
Hello everyone, I am confused: when I tried to construct an instance of NuScenes, I initialized the class by calling the __init__ function in the nuscenes.py file; however, I found there…
-
Hi, I ran: torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/camera-bev256d2.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth to tra…
-
Hello, when I run: "torchpack dist-run -np 2 python tools/train.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml --model.encoders.camera.backbone.init_cfg.checkpoint…