-
Hi, when I use the following command for evaluation:
# Evaluation
sh ./tools/dist_test.sh ./configs/MSMDFusion_nusc_voxel_LC.py $ckpt_path$ 2 --eval bbox
The results obtained are consistent with…
-
Dear author,
May I ask whether only the 500/125 training samples are used when retraining PointPillars and PointRCNN with 500 frames and 125 frames? Or the 500/125 annotated samples plus the 3712-500/125 unannotate…
-
Hi, I have the following data:
- LiDAR files (`*.las`)
- equirectangular images of camera along with heading values
- Car location data
How can I preprocess my data into a trainable format like K…
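Assuming the target is a KITTI-style layout, the point-cloud half of the conversion is the most mechanical part: KITTI stores each scan as a flat float32 buffer of `[x, y, z, intensity]`. A minimal sketch is below; the helper names `las_points_to_kitti_bin` and `load_kitti_bin` are hypothetical, and reading the `.las` files themselves would typically go through a library such as `laspy` (not shown here).

```python
import numpy as np

def las_points_to_kitti_bin(points: np.ndarray, out_path: str) -> None:
    """Write an (N, 4) array of [x, y, z, intensity] as a KITTI-style .bin file.

    Hypothetical helper: KITTI velodyne scans are flat little-endian float32
    buffers with 4 values per point. Intensity is assumed to be pre-scaled
    to [0, 1] (raw .las intensity is usually a 16-bit integer).
    """
    assert points.ndim == 2 and points.shape[1] == 4, "expected (N, 4) points"
    points.astype(np.float32).tofile(out_path)

def load_kitti_bin(path: str) -> np.ndarray:
    """Read a KITTI-style .bin back into an (N, 4) float32 array."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)
```

In practice the x/y/z columns would come from the `.las` reader and intensity from its intensity field normalized to [0, 1]; the equirectangular images, heading values, and car locations would additionally need to be mapped into the target format's calibration and pose files, which this sketch does not cover.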
-
Use the NeonTreeCrowns annotated dataset described in the comment below.
-
Hello, thank you for your great work.
I'm currently working on a research project related to Moving Object Detection (MOS) using **FMCW scanning radar**.
However, I'm facing a challenge in annotatin…
-
I wonder where I can specify the Oxford RobotCar DATAPATH during training and validation.
`python ../tools/train.py --config ../configs/train_config.yaml`
`python ../tools/eval.py --config ../config…
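In many training repos the dataset root is set inside the YAML config rather than on the command line. One quick way to locate the right entry is to scan the loaded config for path-like keys; a minimal sketch is below, where the `dataset.root_dir` key is purely hypothetical and not necessarily what `train_config.yaml` actually uses.

```python
def find_path_keys(cfg: dict, prefix: str = "") -> list:
    """Recursively collect (dotted_key, value) pairs whose value looks like a path.

    Flags string values containing '/' or living under a key that mentions
    'path' or 'root' -- a heuristic, not a guarantee.
    """
    hits = []
    for key, value in cfg.items():
        dotted = f"{prefix}{key}"
        if isinstance(value, dict):
            hits.extend(find_path_keys(value, dotted + "."))
        elif isinstance(value, str) and (
            "/" in value or "path" in key.lower() or "root" in key.lower()
        ):
            hits.append((dotted, value))
    return hits

# Hypothetical stand-in for the real config; in practice it would come from
# something like: cfg = yaml.safe_load(open("../configs/train_config.yaml"))
example = {
    "dataset": {"name": "oxford_robotcar", "root_dir": "/data/oxford"},
    "train": {"batch_size": 8},
}
print(find_path_keys(example))  # -> [('dataset.root_dir', '/data/oxford')]
```

Whatever key turns up is the one to point at the Oxford RobotCar data before launching `train.py` or `eval.py`.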
-
Hi, I happened to notice that there are both 'ann_infos' [here](https://github.com/HuangJunJie2017/BEVDet/blob/90369c3e0b636166b3e292851f53293453eff75f/mmdet3d/datasets/nuscenes_dataset.py#L239) and …
-
I would like to create a BEP to store the audio and/or video recordings of behaving subjects.
While this would obviously be problematic for sharing human data, it would be useful for internal human …
-
Hi Waymo research,
I have some questions regarding the keypoint data published with the latest releases (v1.3.2).
1) In your paper (https://arxiv.org/pdf/2112.12141.pdf), do you align the root j…
-
Hello. When I preprocess the nuScenes training data using the instructions you provided, everything runs correctly: `python tools/create_data.py nuscenes_data_prep --root_path=/root/autodl-tmp/nuscenes/train…