drprojects / DeepViewAgg

[CVPR'22 Best Paper Finalist] Official PyTorch implementation of the method presented in "Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation"

How to produce KITTI-360 `test` predictions ? #30

Closed: Tommydied closed 1 year ago

Tommydied commented 1 year ago

My current input is:

I_GPU=0

DATA_ROOT="./directory"                        # set your dataset root directory, where the data was/will be downloaded
EXP_NAME="My_awesome_KITTI-360_experiment"     # whatever suits your needs
TASK="segmentation"
MODELS_CONFIG="${TASK}/sparseconv3d"           # family of 3D-only models using the sparseconv3d backbone
MODEL_NAME="Res16UNet34"                       # specific model name
DATASET_CONFIG="${TASK}/kitti360-sparse"
TRAINING="kitti360_benchmark/sparseconv3d"     # training configuration for discriminative learning rate on the model
EPOCHS=60
CYLINDERS_PER_EPOCH=12000                      # roughly speaking, 40 cylinders per window
TRAINVAL=False                                 # True to train on Train+Val (e.g. before submission)
MINI=False                                     # True to train on a mini version of KITTI-360 (e.g. to debug)
BATCH_SIZE=6                                   # 4 fits in a 32G V100; can be increased at inference time, of course
WORKERS=0                                      # adapt to your machine
BASE_LR=0.1                                    # initial learning rate
LR_SCHEDULER='multi_step_kitti360'             # learning rate scheduler for 60 epochs
EVAL_FREQUENCY=5                               # frequency at which metrics are computed on Val; less frequent evaluation means faster training but fewer points on your validation curves
SUBMISSION=False                               # True to generate files for a submission to the KITTI-360 3D semantic segmentation benchmark
CHECKPOINT_DIR="/home/Deep"                    # optional path to an already-existing checkpoint; if provided, training resumes where it left off
export SPARSE_BACKEND=torchsparse
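
For context, I then forward these variables to train.py as Hydra overrides, roughly along the following lines (the exact override keys below are my best guess from the torch-points3d conventions this repo follows; the authoritative command is in the repository README):

# Hypothetical launch command; override keys are assumptions, not verified.
python train.py \
    task=${TASK} \
    data=${DATASET_CONFIG} \
    models=${MODELS_CONFIG} \
    model_name=${MODEL_NAME} \
    training=${TRAINING} \
    lr_scheduler=${LR_SCHEDULER} \
    eval_frequency=${EVAL_FREQUENCY} \
    data.dataroot=${DATA_ROOT} \
    data.mini=${MINI} \
    training.cuda=${I_GPU} \
    training.batch_size=${BATCH_SIZE} \
    training.epochs=${EPOCHS} \
    training.num_workers=${WORKERS} \
    training.optim.base_lr=${BASE_LR} \
    training.checkpoint_dir=${CHECKPOINT_DIR} \
    tracker_options.make_submission=${SUBMISSION}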

In the end, the code only ran the "train" and "val" stages. How can I execute the "test" phase? Is something missing or incorrect in my input?

drprojects commented 1 year ago

Hi, this is the expected behavior.

Indeed, the KITTI-360 dataset does have a 'test' set, but its labels are held out: the set exists only for submitting predictions to the official KITTI-360 benchmark server. Put differently, there are no labels on the test set, so if your intention is to measure performance on it, you need to submit your results to the KITTI-360 server.

By default, computing predictions on the test set is switched off to avoid unnecessary computation. If you want to activate it, you can set:

tracker_options:
  full_res: True
  make_submission: True

in your conf/config/train.yaml. Setting full_res: True triggers full-resolution prediction computation on the train, val and test sets at the very end of training, while make_submission: True prepares and stores the 'test' predictions for submission to the KITTI-360 server. These predictions are saved in your data_root/submissions directory by default.
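
Since the configuration is Hydra-based, you should also be able to pass these options as command-line overrides instead of editing the YAML, along these lines (the exact key path is an assumption; double-check it against conf/config/train.yaml):

# Assumed Hydra dotted-override syntax; verify the key path before use.
python train.py <your usual overrides> \
    tracker_options.full_res=True \
    tracker_options.make_submission=True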

To avoid re-training just for this, I provide a notebook for dataset evaluation from a pretrained checkpoint. Please have a look at the KITTI-360 inference notebook, where all you should have to do is set:

split = 'test'

Best,

Damien