-
I used the command from the provided script to get the Swin-T weights:
`wget https://bevfusion.mit.edu/files/pretrained_updated/swint-nuimages-pretrained.pth`
However, I get a 404 Not Found err…
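When a pretrained-weight link 404s like this, it can help to probe the URL before handing it to `wget`. A minimal sketch (not part of the BEVFusion repo) using only the Python standard library:

```python
# Sketch: probe a checkpoint URL before downloading, so a dead link
# fails fast instead of partway through a download script.
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError


def is_reachable(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers without an HTTP/URL error."""
    try:
        with urlopen(Request(url), timeout=timeout):
            return True
    except (HTTPError, URLError, ValueError):
        # HTTPError covers 404/403 etc.; URLError covers DNS/connect failures.
        return False


# e.g. probe the weight URL from the script before calling wget on it:
# is_reachable("https://bevfusion.mit.edu/files/pretrained_updated/"
#              "swint-nuimages-pretrained.pth")
```

If this returns `False` for the URL in the download script, the file has likely been moved or removed on the server rather than anything being wrong locally.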
-
I want to know the specific configuration for training at 512x512. Should `--num_bucket_per_side` be 512, 512?
-
This is the training code:
`torchpack dist-run -np 1 python tools/train.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml --model.encoders.camera.backbone.init_cfg.c…
-
As described in README.md:
`./tools/download_pretrained.sh`
It seems there are no resources to download. I opened download_pretrained.sh:
```
mkdir pretrained &&
cd pretrained &&
wget htt…
```
-
When training, I run: torchpack dist-run -np 2 python tools/train.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml --model.encoders.camera.backbone.init_cfg.checkpoi…
-
![image](https://github.com/mit-han-lab/bevfusion/assets/46307359/20ca3cd6-9850-4a81-8533-0b0a7cda9782)
Hi guys,
I would like to know how the radar data is used.
![image](https://github.com/mit-han…
-
**When I run the command:** torchpack dist-run -np 4 python tools/train.py \
configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml \
--model.encoders.camera.backbone.ini…
-
Thanks for your excellent work!
We have encountered some problems during reproduction.
We use three RTX 3060 12 GB GPUs, so we adjusted:
the batch size to 1 (in nuscenes/default.yaml, samplers_per_gpu: 1, worker…
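When the per-GPU batch size is reduced to fit smaller GPUs like this, a common heuristic (the linear scaling rule, not something specific to BEVFusion) is to shrink the base learning rate in proportion to the effective batch size. A small sketch, where the reference batch of 32 and the base LR value are assumptions for illustration:

```python
# Linear scaling rule (a common heuristic): scale the base learning rate
# by the ratio of the new effective batch size to the reference one,
# so per-step gradient magnitudes stay roughly comparable.

def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Scale base_lr by the ratio of effective batch sizes."""
    return base_lr * new_batch / base_batch


# Hypothetical reference recipe: 8 GPUs x 4 samples = 32.
# This setup: 3 GPUs x 1 sample = 3.
lr = scaled_lr(2.0e-4, base_batch=32, new_batch=3)
```

This is only a starting point; with such a small effective batch, a longer warmup is often needed as well.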
-
(bevfusion) wsd2@wsd2:~/bevfusion$ torchpack dist-run -np 1 python tools/train.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml --model.encoders.camera.backbone.init_…
-
Hello,
I'm getting this error while training on the nuScenes dataset with torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/centerhead/lssfpn/camera/256x704/swint/default.yaml --model…