PRBonn / OverlapNet

OverlapNet - Loop Closing for 3D LiDAR-based SLAM (chen2020rss)
MIT License

Some confusion about normalize_data.py #27

Closed shenhai911 closed 2 years ago

shenhai911 commented 2 years ago

Dear author, while generating the training and validation sets, I found that there are three versions of normalize_data.py:

1. TensorFlow version: OverlapNet/src/utils/normalize_data.py

2. PyTorch version: OverlapNet/tools/utils/normalize_data.py

3. overlap_localization version: overlap_localization/src/prepare_training/normalize_data.py

I want to know which version you used in the paper to keep the same number of samples in each overlap bin.

It would be great if you could explain more. Thank you very much.

Chen-Xieyuanli commented 2 years ago

Hi @shenhai911, thanks for using our code!

The original OverlapNet paper used the first version.

The PyTorch version was implemented by other users. As far as I know, they found it also works well.

Different amounts of data will influence the training time, and the overlap distributions vary between datasets, so the normalization can be adjusted accordingly.
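
Roughly speaking, the normalization balances the number of samples across overlap bins. A minimal sketch of the idea (the array layout, bin count, and per-bin cap here are assumptions and differ between the three versions):

```python
import numpy as np

def normalize_data(ground_truth_mapping, num_bins=10):
    """Balance the overlap bins by subsampling over-represented ones.

    A minimal sketch, assuming `ground_truth_mapping` is an (N, 4) array
    whose last column is the overlap value in [0, 1]; the exact layout,
    bin count, and per-bin cap differ between the three versions.
    """
    overlaps = ground_truth_mapping[:, -1]
    # assign each sample to an overlap bin
    bin_ids = np.minimum((overlaps * num_bins).astype(int), num_bins - 1)
    counts = np.bincount(bin_ids, minlength=num_bins)
    # cap every bin at the size of the smallest non-empty bin
    cap = counts[counts > 0].min()
    keep = np.concatenate([
        np.random.permutation(np.where(bin_ids == b)[0])[:cap]
        for b in range(num_bins) if counts[b] > 0
    ])
    return ground_truth_mapping[keep]
```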

shenhai911 commented 2 years ago

Dear author, thank you for your quick reply. Now I have two main questions.

1. When I use the first version of the code to generate the ground truth for sequence 08, the following error occurs when splitting the training and validation sets:

```
Traceback (most recent call last):
  File "src/generate_training_data/generate_ground_truth_single_sequence.py", line 110, in <module>
    train_data, validation_data = split_train_val(dist_norm_data)
  File "src/generate_training_data/../../src/utils/split_train_val.py", line 21, in split_train_val
    train_set, test_set = train_test_split(ground_truth_mapping, test_size=test_size)
  File "/home/xxx/softwares/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/sklearn/model_selection/_split.py", line 2423, in train_test_split
    n_samples, test_size, train_size, default_test_size=0.25
  File "/home/xxx/softwares/anaconda3/envs/OverlapNet_env/lib/python3.7/site-packages/sklearn/model_selection/_split.py", line 2046, in _validate_shuffle_split
    "(0, 1) range".format(test_size, n_samples)
ValueError: test_size=0 should be either positive and smaller than the number of samples 2 or a float in the (0, 1) range
```

And I pasted the related code as screenshots.

I also printed the histogram statistics and the normalized results and pasted them as a screenshot.

I don't know whether this is a mistake on my side or a bug in the code itself; a minimal reproduction of the failure is below. I wonder if you have encountered this problem.
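
My guess from the traceback (not the exact code) is that split_train_val derives an integer test_size from the number of remaining samples, which rounds down to zero when almost all samples are dropped during normalization:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# After normalization, only 2 sample pairs survived for sequence 08.
ground_truth_mapping = np.zeros((2, 4))

# If the test size is derived as an integer fraction of the samples,
# it rounds down to 0, which sklearn rejects:
test_size = int(0.1 * len(ground_truth_mapping))  # -> 0
train_set, test_set = train_test_split(ground_truth_mapping, test_size=test_size)
# ValueError: test_size=0 should be either positive and smaller than the
# number of samples 2 or a float in the (0, 1) range
```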

2. In addition, in your paper you take sequences 03-10 as the training set and sequence 11 as the validation set.

Therefore, I want to know: if sequence 11 is used as the validation set, do sequences 03-10 still need to be split into training and validation sets when generating the ground truth?

It would be great if you could explain more. Thank you very much.

Chen-Xieyuanli commented 2 years ago

Hey @shenhai911, for the final version of the OverlapNet training we use the validation splits of all sequences, instead of only one sequence, to get a more general distribution. However, we still use only sequence 02 for tuning the parameters of our SLAM algorithm, and therefore still call it the validation sequence. We didn't use any data from the test set (sequences 11-21).
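
As a rough sketch of that splitting scheme (with dummy data standing in for the real per-sequence ground-truth mappings; the actual scripts differ):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Every training sequence (03-10) contributes its own validation portion,
# so the combined validation set reflects the distribution of all
# sequences, not just one.
rng = np.random.default_rng(0)
train_parts, val_parts = [], []
for seq in range(3, 11):
    # stand-in for the real per-sequence ground-truth mapping (N, 4)
    gt = rng.random((1000, 4))
    train, val = train_test_split(gt, test_size=0.1)
    train_parts.append(train)
    val_parts.append(val)

train_data = np.concatenate(train_parts)
val_data = np.concatenate(val_parts)
print(train_data.shape, val_data.shape)  # (7200, 4) (800, 4)
```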

For the distribution issue of seq 08, it seems the overlap values are wrong, which could be caused by inaccurate poses. I guess you are using the ground-truth poses provided by KITTI, which actually contain a lot of noise. You may instead use the poses we provide with SemanticKITTI, where we refined the poses using our SLAM approach; you should get better results with them. download link
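
For reference, the poses are stored in the standard KITTI text format (one row-major 3x4 matrix per line); a minimal loader could look like this:

```python
import numpy as np

def load_kitti_poses(pose_file):
    """Load KITTI-format poses: one row-major 3x4 matrix per line."""
    poses = []
    with open(pose_file) as f:
        for line in f:
            T = np.eye(4)
            T[:3, :4] = np.array(line.split(), dtype=np.float64).reshape(3, 4)
            poses.append(T)
    return poses

# Note: the poses are given in the left-camera frame; for LiDAR-based
# overlap computation they are typically mapped into the LiDAR frame with
# the Tr matrix from the sequence's calib.txt: T_lidar = inv(Tr) @ T_cam @ Tr
```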