nubot-nudt / InsMOS

[IROS23] InsMOS: Instance-Aware Moving Object Segmentation in LiDAR Data
MIT License

Questions on applying to the Kitti360 dataset #4

Closed hwan0806 closed 11 months ago

hwan0806 commented 12 months ago

First of all, thank you for open-sourcing an awesome paper and its code. I think your group is making a great contribution to the development of the academic community, and I always appreciate it :)

I reproduced the validation results you shared on the KITTI dataset and got great results, but when I applied the model to the KITTI-360 dataset, the results were relatively poor, which prompted this question.

I used N_10_t_0.1_odom.ckpt and applied it to KITTI-360's 2013_05_28_drive_0009_sync sequence. In particular, the instance segmentation did not perform well and tended to label static instances as dynamic objects. If you have ever tested this model on the KITTI-360 dataset, could you share your results? If not, could you share any tips on tuning the parameters?

The data format of KITTI-360 is slightly different from that of KITTI, so a preprocessing step is needed to convert it to the KITTI format, and I may have made some mistakes in that process. I can share the data I used if needed.

Thank you!

neng-wang commented 12 months ago

Hi @hwan0806. Thank you for your interest in our work. Unfortunately, I haven't tested our model on the KITTI-360 dataset. However, I will test it based on your request, and if I get any results, I will share them with you.

hwan0806 commented 12 months ago

I look forward to hearing back about the results. Thank you for your efforts!

neng-wang commented 11 months ago

Hi, @hwan0806! Sorry for my late reply. Today, I tested our method on KITTI-360's 2013_05_28_drive_0009_sync sequence, and I got good results, similar to those on the KITTI validation set. You can check the predictions from here, code: pnhr. I guess you may have made some incorrect conversions while converting the data to the KITTI format. Actually, I did not convert the data to the KITTI format; I just changed the directory structure, as in the sketch below. [screenshot: KITTI-style directory structure]
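The original screenshot did not survive in this thread. As a rough sketch only, assuming a standard KITTI-odometry-style layout with hypothetical paths (the KITTI-360 raw-data root and a made-up sequence id `09`), the restructuring might look like this:

```python
import os

# Hypothetical paths; adapt to your setup. The sequence id "09" is made up.
SRC = 'KITTI-360/data_3d_raw/2013_05_28_drive_0009_sync/velodyne_points/data'
DST = 'sequences/09'

# Assumed target layout (KITTI odometry style):
#   sequences/09/
#   ├── calib.txt   <- copied from KITTI, Tr set to identity (see issue 2 below)
#   ├── poses.txt   <- generated with KISS-ICP (see issues 1 and 2 below)
#   ├── time.txt    <- generated with the snippet in issue 2 below
#   └── velodyne/   <- the raw scans, symlinked under zero-padded names
os.makedirs(os.path.join(DST, 'velodyne'), exist_ok=True)
for name in sorted(os.listdir(SRC)):
    if name.endswith('.bin'):
        # KITTI-360 uses 10-digit frame names; relink as 6-digit KITTI names.
        idx = int(os.path.splitext(name)[0])
        link = os.path.join(DST, 'velodyne', f'{idx:06d}.bin')
        if not os.path.exists(link):
            os.symlink(os.path.abspath(os.path.join(SRC, name)), link)
```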

To achieve good results, you may need to focus on the following issues:

  1. How to get poses.txt? From the KITTI-360 website, we notice that not all velodyne frames have poses. [screenshot: pose availability note from the KITTI-360 website] Due to the discontinuity of the velodyne poses, we cannot use the officially provided poses for inference, so I used KISS-ICP to generate poses. You can check the generated poses from here, code: pnhr.
  2. How to get poses from KISS-ICP? You can follow the README.md in the KISS-ICP repo to install it, and then run kitti.ipynb to generate the poses. Note that before running, you must have the calib.txt and time.txt files in place. You can generate the time.txt file directly with the following code:

     ```python
     # Write one timestamp per scan; 2013_05_28_drive_0009_sync has 14056 scans,
     # and the Velodyne spins at 10 Hz, hence the 0.1 s spacing.
     with open('time.txt', 'w') as file:
         for i in range(0, 14056):
             time = i * 0.1
             file.write(str(time) + '\n')
     ```

     As for the calib.txt file, you are free to copy a calib.txt file from the KITTI dataset and modify Tr as follows: [1,0,0,0,0,1,0,0,0,0,1,0] (see the sketch after this list). By doing this, you can directly obtain the LiDAR poses.
  3. After obtaining the poses, you can run our algorithm directly without modifying any of the preprocessing code.
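As a complement to issue 2, here is a minimal hedged sketch of that calib.txt edit: it only overwrites the existing Tr line with the 3x4 identity extrinsic, and it assumes the file has exactly one line starting with "Tr".

```python
# Minimal sketch of the calib.txt edit from issue 2: replace the Tr line of a
# calib.txt copied from the KITTI dataset with the 3x4 identity extrinsic.
# Assumes exactly one line in the file starts with "Tr".
identity_tr = 'Tr: 1 0 0 0 0 1 0 0 0 0 1 0\n'

with open('calib.txt') as f:
    lines = [identity_tr if line.startswith('Tr') else line for line in f]

with open('calib.txt', 'w') as f:
    f.writelines(lines)
```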
hwan0806 commented 11 months ago

Thank you, @neng-wang, for your specific and kind explanation! Before reaching out to you, I had synchronized the scan data with the pose data, but I now realize that was an invalid approach for this model. Following your advice, I used KISS-ICP to extract the pose information and got fantastic results, as shown below. [screenshot: InsMOS predictions on KITTI-360]

Thank you for your effort! I'll close this issue.