EEPT-LAB / DipG-Seg

The official implementation of DipG-Seg.
GNU General Public License v3.0

Can it run with the Robosense lidar (Helios 16)? #6

Closed: Sanford-zsh closed this issue 7 months ago

Sanford-zsh commented 7 months ago

Hi~ The paper mentions that you tested the code with a 16-beam lidar and got good results. I want to run the code with a Robosense lidar, which is also a 16-beam model (Helios 16). I have modified some parameters, but I do not know how to modify the compensating vector (const static int cpst[16]). Could you tell me how to set this parameter? Thanks a lot!

wenhao12111 commented 7 months ago

May I know your sensor height?

Sanford-zsh commented 7 months ago

It is mounted on the robot at a height of 0.6 meters.

wenhao12111 commented 7 months ago

Based on a rough calculation, you can try a cpst[16] filled with 1s, because the vertical resolution is too coarse to need compensation (with a compensating distance of 0.3).
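
If it helps, here is a minimal sketch of that suggestion, assuming cpst is the per-ring compensating vector declared as const static int cpst[16] in the projection code:

// All-ones compensating vector for a 16-beam lidar, per the suggestion
// above (i.e., effectively no per-ring compensation is applied)
const static int cpst[16] = {1, 1, 1, 1, 1, 1, 1, 1,
                             1, 1, 1, 1, 1, 1, 1, 1};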

Sanford-zsh commented 7 months ago

I have tried what you said, but the segmentation still doesn't work very well. I have modified some parameters as follows: [image: modified parameters]. In fact, my LiDAR's rear view is obscured by robot parts, and I don't know whether that affects the segmentation result.

wenhao12111 commented 7 months ago

The situation you describe is a potential cause of the degraded result. You can try cropping the projected image to the valid angle range; that may work. If not, you can send me a segmentation result (image and point cloud) for further discussion.
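
As an illustration, a hypothetical sketch of such a crop, assuming the projected image's columns span 360 degrees of azimuth from left to right (the helper name and column mapping are illustrative, not from the repository):

#include <algorithm>
#include <opencv2/core.hpp>

// Hypothetical helper: keep only the columns of the projected image that
// fall inside the unobstructed horizontal FOV. Adapt degToCol to the
// azimuth-to-column formula your projection actually uses.
cv::Mat cropToValidFov(const cv::Mat& img, float min_deg, float max_deg){
    auto degToCol = [&](float deg){
        return static_cast<int>((deg + 180.0f) / 360.0f * img.cols);
    };
    int c0 = std::max(0, degToCol(min_deg));
    int c1 = std::min(img.cols, degToCol(max_deg));
    return img.colRange(c0, c1).clone(); // end column is exclusive
}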

Sanford-zsh commented 7 months ago

I have tried cropping the projected image to the valid angle range, but it still does not work very well. I can provide a bag file; how can I send it to you?

wenhao12111 commented 7 months ago

You can show me the visualized result first so that I can analyze it quickly.

Sanford-zsh commented 7 months ago

OK. I made a video showing the segmentation result: https://github.com/EEPT-LAB/DipG-Seg/assets/56405167/cc850a23-d89a-4e42-b53a-13f109ecb45d

wenhao12111 commented 7 months ago

OK, I have seen your video. First, you can try to fix a potential bug here by replacing this line with:

// clamp negative vertical range differences to a tiny positive value
if (repaired_img_d.at<float>(r-j, c) - repaired_img_d.at<float>(r, c) < 0) {
    d_vertical.at<float>(r-1, c) = 0.00001; // tiny value keeps the slope at this pixel very large
}
else {
    d_vertical.at<float>(r-1, c) = repaired_img_d.at<float>(r-j, c) - repaired_img_d.at<float>(r, c) + 0.001;
}

Besides, you can use cv::imshow() to check your image's quality (high quality means fewer empty holes); poor quality will affect the final result. In that case, you can try transforming your point cloud into a smoother image (as far as we know, another user of our method got a better segmentation result this way).
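
For example, a minimal sketch of that check, assuming repaired_img_d is the CV_32F projected depth image from the snippet above (the helper name and normalization are illustrative):

#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>

// Hypothetical helper: display the projected depth image so that empty
// holes (zero-depth pixels) appear as dark gaps.
void showRangeImage(const cv::Mat& repaired_img_d){
    cv::Mat vis;
    cv::normalize(repaired_img_d, vis, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imshow("range image", vis);
    cv::waitKey(1);
}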

Finally, if the above does not help, you can try adjusting the thresholds mentioned in the paper, guided by its parameter study and analysis.

Hope the above helps.

Sanford-zsh commented 7 months ago

I modified the code as you suggested, and it clearly works better than before. I think some parameters still need fine-tuning to make it even better. Thank you so much!

https://github.com/EEPT-LAB/DipG-Seg/assets/56405167/c9211fd9-b182-472a-9aca-fc2d2ed73463

wenhao12111 commented 7 months ago

So glad the suggestions helped. Good luck!