EEPT-LAB / DipG-Seg

The official implementation of DipG-Seg.

Is this able to run on custom pcd data? #4

Closed. wcyjerry closed this issue 8 months ago.

wcyjerry commented 8 months ago

Thanks and congrats to you guys. I'm not familiar with lidar, but I do need a method to segment ground in point clouds. Can your repo run on plain point cloud data instead of organized lidar data with a shape like (64 * 870)?

wenhao12111 commented 8 months ago

Yes, it can run on raw lidar data, but I only provide lidar configs for the Velodyne HDL-64E and HDL-32E. So if you want to use your own lidar data, please work out the lidar config needed to project it to an image. If you have difficulties with the projection, you can tell me your lidar type and maybe I can help you somewhat.
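
For readers adapting another sensor, here is a minimal sketch of the kind of spherical projection such a config drives, written in Python/NumPy for readability (the repo itself is C++, so this is an illustration, not the repo's implementation). The image size matches the (64 * 870) mentioned above, and the field-of-view defaults are typical HDL-64E values used only as placeholders; substitute your own sensor's elevation angle scope and beam count.

```python
import numpy as np

def project_to_range_image(points, h=64, w=870,
                           fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) xyz point cloud onto an (h, w) range image.

    fov_up_deg / fov_down_deg define the elevation angle scope; the
    defaults are placeholder HDL-64E values, not universal constants.
    """
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    fov = abs(fov_up) + abs(fov_down)

    r = np.linalg.norm(points, axis=1)
    keep = r > 1e-3                      # drop zero/invalid returns
    points, r = points[keep], r[keep]

    yaw = np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(np.clip(points[:, 2] / r, -1.0, 1.0))

    # azimuth angle -> column index, elevation angle -> row index
    cols = np.clip((0.5 * (1.0 - yaw / np.pi) * w).astype(int), 0, w - 1)
    rows = np.clip(((1.0 - (pitch + abs(fov_down)) / fov) * h).astype(int),
                   0, h - 1)

    # write far points first so the nearest return wins at each pixel
    order = np.argsort(-r)
    image = np.full((h, w), -1.0, dtype=np.float32)
    image[rows[order], cols[order]] = r[order]
    return image
```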

wcyjerry commented 8 months ago

@wenhao12111 Hi, thanks for the reply. I have a .pcd file, which loads as a (230400, 3) array; is this enough to run your method? And another question: the metrics in your paper differ from the original papers. For example, for Patchwork++ your paper reports an F1 of 92.49 while their paper reports 96.51. Why?

wenhao12111 commented 8 months ago

You need to know your lidar parameters, such as the elevation angle scope and the number of laser beams. As for Patchwork++: as mentioned in their paper, it ignores the vegetation label in the evaluation; you can refer to their paper for details. The differences between the experimental setups can also be found in our paper and theirs.
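
To make the protocol difference concrete, here is a toy sketch (Python/NumPy; the labels and predictions are made up, and vegetation's SemanticKITTI ID of 70 is taken from the standard config) of how excluding vegetation from the evaluation can move the F1 score:

```python
import numpy as np

def f1(pred, gt):
    """F1 for binary ground (1) vs. non-ground (0) point labels."""
    tp = np.sum((pred == 1) & (gt == 1))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)

# Toy per-point labels: 40 = road, 48 = sidewalk (ground); 70 = vegetation,
# 10 = car (non-ground). A method that marks vegetation points as ground
# picks up false positives only if those points are counted.
labels = np.array([40, 48, 70, 70, 10, 40])
gt     = np.isin(labels, [40, 48]).astype(int)
pred   = np.array([1,  1,  1,  1,  0,  1])

print(f1(pred, gt))                # 0.75: vegetation FPs are counted
keep = labels != 70                # ignore vegetation, per their protocol
print(f1(pred[keep], gt[keep]))    # 1.0: same predictions, higher F1
```

Small protocol choices like this can account for sizeable gaps between numbers reported in different papers.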

wcyjerry commented 8 months ago

Yes, I have the elevation angle scope and the number of laser beams. Do I need anything else? I ask because when I run Patchwork++ I just feed it all the points and get a result. Another question: how do you evaluate the learning-based methods? In their papers they segment all kinds of classes, so which classes did you define as ground?

wenhao12111 commented 8 months ago

After getting these parameters, you can refer to [projection_param.h](https://github.com/EEPT-LAB/DipG-Seg/blob/main/src/include/projection_param.h) to adapt to your own lidar; the instructions are in the comments of that file. Note that these parameters are used to project the point cloud into images. For the other question, the ground label includes road, parking, sidewalk, lane-marking, terrain, and other-ground. To evaluate the learning-based methods, we first remap these ground labels to one class and all other labels to non-ground by referring to the official config file, then run the learning-based models and compute the confusion matrix to obtain the results.
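
As a minimal sketch of that remap step (Python; the IDs are the standard SemanticKITTI ones for these classes, and the lower-16-bit mask follows the SemanticKITTI .label format, so verify both against the official config file; the function name is just for illustration):

```python
import numpy as np

# Ground classes named above with their standard SemanticKITTI IDs:
# road=40, parking=44, sidewalk=48, other-ground=49, lane-marking=60,
# terrain=72 (verify against the official semantic-kitti.yaml).
GROUND_IDS = np.array([40, 44, 48, 49, 60, 72])

def remap_to_ground(label_file):
    """Remap a SemanticKITTI .label file to 1 = ground, 0 = non-ground."""
    raw = np.fromfile(label_file, dtype=np.uint32)
    semantic = raw & 0xFFFF        # lower 16 bits hold the semantic class
    return np.isin(semantic, GROUND_IDS).astype(np.uint8)

# e.g. gt = remap_to_ground("000000.label"); remap predictions the same
# way, then accumulate the binary confusion matrix over all scans.
```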

wcyjerry commented 8 months ago

@wenhao12111 Thanks so much; it's very generous of you to explain this so well. Sorry to bother you: I'm really not experienced in this area, and I just need a good tool to process my lidar point clouds for the downstream task I am familiar with.

wenhao12111 commented 8 months ago

You are welcome; I'm glad the response helped. I will close this issue now.