Field-Robotics-Lab / nps_uw_multibeam_sonar

Multibeam sonar plugin with the NVIDIA CUDA library
Apache License 2.0
35 stars · 20 forks

Variational surface reflectivity #2

Closed · woensug-choi closed this issue 3 years ago

woensug-choi commented 3 years ago

Currently, constant surface reflectivity (1E-4) is used. In order to provide variational surface reflectivity according to objects, the following functions could be exploited.

Challenge

@mabelzhang 's advice on details

I don't have an exact example for getting the model name per se, but you should be able to get the Visual and then traverse up the SDF tree to the model and its name. Or you could name your Visual something specific to save traversing the tree, I would think.

One way, I don't know if this is practical for you, is to use the UserCamera::Visual function, which takes screen coordinates and returns the Visual at that pixel location: https://github.com/osrf/gazebo/blob/gazebo11/gazebo/rendering/UserCamera.hh#L185 My guess is you could set your depth camera to the same pose as the user camera, so that they share pixel coordinates.

I believe you can go up the SDF tree from Visual::GetSDF(), in order to get the parent model and its name, but I have not tried: https://github.com/osrf/gazebo/blob/gazebo11/gazebo/rendering/Visual.hh#L135

Or if you name the Visual something specific, you can get the Visual's name via Name(): https://github.com/osrf/gazebo/blob/gazebo11/gazebo/rendering/Visual.hh#L143

Another way is to set up a rendering::SelectionBuffer, as illustrated in the FiducialCameraPlugin: https://github.com/osrf/gazebo/blob/gazebo11/plugins/FiducialCameraPlugin.cc#L195

woensug-choi commented 3 years ago

@mabelzhang Happy new year! How are you? Any findings or updates? I would like to push forward on this issue.

mabelzhang commented 3 years ago

Sorry for the latency! I've been quite scattered across different projects. This is the next thing on my list. I cloned the repo and briefly familiarized myself with the code. I'll dive in next Tuesday.

mabelzhang commented 3 years ago

I pushed a minimal working example SonarDummy to branch mabelzhang/selection_buffer.

It's not meant to be merged. It's just an example to illustrate usage. I removed the CUDA dependency since I don't need to compile with it. I replaced some code so that you can run it with the normal command:

$ roslaunch nps_uw_multibeam_sonar sonar_tank_blueview_p900_nps_multibeam.launch verbose:=true

The example is not publishing to ROS. To see the depth image for reference, in Gazebo use Window > Topic Visualization and select the ImageStamped type that says /gazebo/default/bluevi....

Usage

Basically, in OnNewImageFrame(), you can set pt to whatever pixel location (x, y), and it’ll return the name of the model at that location.

For example, if I set it to the center of a model (code from FiducialCameraPlugin, which populates the list with all the models in the world by default):

   ignition::math::Vector2i pt =
       this->depthCamera->Project(vis->WorldPose().Pos());

The only model with its center in the camera view is the cylinder, so it’ll return the cylinder model’s name:

[Err] [sonar_dummy_plugin.cpp:258] visual: cylinder_target
[Err] [sonar_dummy_plugin.cpp:259] point: 256, 57

If I set y to 0, where the image shows the wall of the tank:

    ignition::math::Vector2i pt = ignition::math::Vector2i(256, 0);

It will return the tank at that pixel location:

[Err] [sonar_dummy_plugin.cpp:264] visual: basement_tank_model
[Err] [sonar_dummy_plugin.cpp:265] point: 256, 0

Given a pixel location, it returns the name of the model at that location.

Notes

The only caveat is that it needs gazebo/rendering/selection_buffer/SelectionBuffer.hh, which by default is not installed with Gazebo (it's not included in the headers in rendering/CMakeLists.txt). There are two options to get access to it. One is to compile Gazebo 9 from source and add the header files in selection_buffer to rendering/CMakeLists.txt. If you don't normally do that, the other option is to copy the header files in selection_buffer into this repo. I've gone ahead and added those files in my branch. If the files happen to change upstream, you'll have to pull in the changes manually.

woensug-choi commented 3 years ago

It's a huge step for me! Pixel-wise recognition is good enough for the sonar calculation! Thank you. I will try it out and ask questions sometime soon.

Is the header included in later versions of Gazebo 9?

mabelzhang commented 3 years ago

Let me know if there are problems with occlusion. There's a line that checks for occlusion, but I don't know what happens when things are occluded. So if occlusion is important in your use case, maybe try a few cases and see if it gets you the right thing.

As of the latest version in the Gazebo 9 repository, the header directory is not installed. I'm asking around to see if we can get them installed in future versions. Will update when I have answers.

mabelzhang commented 3 years ago

Short answer is "probably yes." Long answer is that we need to Pimplize the header files before installing them for public use. In English, that translates to: I need to do that and make a PR to gazebo9, and if there are no problems, it'll be in the next release.

Double-checking, gazebo9 is what you use, right? Or gazebo11? We can forward-port and backport if needed, but for starters, I want to target the one you're using.

mabelzhang commented 3 years ago

I started Pimplizing the header files thinking it'd be quick, but I'm getting a seg fault. Before I spend more time looking into what I did wrong, let me know if you want to commit to this path. If so, I'll spend more time on it and open a PR to Gazebo 9 after I fix the seg fault.

woensug-choi commented 3 years ago

I think having the headers in this repo would not be a problem at all, so I don't think Pimplizing is necessary if it's not simple.

woensug-choi commented 3 years ago

I've been trying to port your forked branch into a branch that includes the rest of the code with the CUDA library: https://github.com/Field-Robotics-Lab/nps_uw_multibeam_sonar/tree/reflectivity

This part https://github.com/Field-Robotics-Lab/nps_uw_multibeam_sonar/blob/c5e0ce3c84d40dbf710fbccaec6ff5a56710f858/src/gazebo_ros_multibeam_sonar.cpp#L538-L542 is causing the following error:

gzserver: /usr/include/boost/smart_ptr/shared_ptr.hpp:734: typename boost::detail::sp_member_access<T>::type boost::shared_ptr<T>::operator->() const [with T = gazebo::rendering::Scene; typename boost::detail::sp_member_access<T>::type = gazebo::rendering::Scene*]: Assertion `px != 0' failed.

mabelzhang commented 3 years ago

Looks like a null pointer. Are scene and camera_ initialized?

Looking at the C++ file, I think scene is not initialized. It needs to be initialized in Load() like: https://github.com/Field-Robotics-Lab/nps_uw_multibeam_sonar/blob/mabelzhang/selection_buffer/src/sonar_dummy_plugin.cpp#L115-L125

woensug-choi commented 3 years ago

@mabelzhang It worked! But I found another critical problem as I went further. This loop is so slow that the whole plugin frame rate degrades from 10 Hz to somewhere around 0.2 Hz. It doesn't refresh at all when I include the code lines, currently commented out, that assign reflectivity values to the OpenCV matrix. Would there be any way to make it run faster? I am feeling pessimistic :(

https://github.com/Field-Robotics-Lab/nps_uw_multibeam_sonar/blob/e5c5ee6285cd6e037a50ab66d909fd5fb42b0c43/src/gazebo_ros_multibeam_sonar.cpp#L616-L662

mabelzhang commented 3 years ago

A nested for-loop will be slow. What's the resolution? I don't know CUDA programming, but the GPU is at your disposal if you can parallelize the for-loop on it :)

Do you know where the bottleneck is? You can profile it by timing each section, say, each if-statement, and each for-loop, to see which part is slow.

If you can't use the GPU, maybe a coarser resolution would help - does it need to be accurate to every pixel? You could add an SDF plugin parameter to let the user specify the resolution and apply it to both i and j, like how rayskip works now, and see what resolution is acceptable.
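A minimal sketch of that stride idea (the parameter name and helper are hypothetical, just mirroring how the existing rayskip parameter thins rays):

```cpp
#include <cassert>

// Visit only every `skip`-th pixel in both i and j, the way rayskip
// already thins rays. Returns how many selection-buffer lookups a
// width x height image would need at a given skip.
int visitedPixels(int width, int height, int skip)
{
  int count = 0;
  for (int j = 0; j < height; j += skip)
    for (int i = 0; i < width; i += skip)
      ++count;  // one selection-buffer lookup per visited pixel
  return count;
}
```

At 640x480, a skip of 10 cuts the lookups from 307200 to 3072, a 100x reduction.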

Probably the smartest way is to parallelize it on the GPU though, since it's already required.

woensug-choi commented 3 years ago

I will try GPU parallelization, though I am pessimistic about it for this part. Last time, the CUDA functions were pretty much restricted to their own library functions.

mabelzhang commented 3 years ago

Another idea, since it's a simulation, is that you could segment the RGB or depth image first. Then you only need to run it in the centers of the segments. It might be a quicker idea to try out, if you can find a quick off-the-shelf segmentation implementation. Even superpixels would be a lot less to process than every pixel.

woensug-choi commented 3 years ago

Sorry, I don't understand how segmentation (distinguishing and grouping the pixels by object? that still requires a loop over every pixel) could help. Could you elaborate on the process?

mabelzhang commented 3 years ago

With segmentation, since this is a continuous stream, you wouldn't expect every frame to change a lot from the previous frame. So you don't need to segment at the frequency of every frame - assuming that is why this is slowing things down. For a naive approach, you can just segment every n frames, and assume the centers of those segments won't change too much over a short period of time. Then, in each time step, you only need to run the selection buffer check on, say, 20 pixels at most. You don't expect a lot more than 10 or 20 segments in an underwater environment, right?

I mean, even without segmenting, to optimize this, it shouldn't need to run at every frame, since you don't expect things to suddenly jump out of the frame.
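The run-every-n-frames idea can be sketched as a small gate (all names here are illustrative, not from the plugin):

```cpp
#include <cassert>

// Runs the expensive selection-buffer pass only on every n-th frame;
// the cached model names would be reused in between.
struct FrameGate
{
  explicit FrameGate(int interval) : interval_(interval) {}
  bool ShouldRun() { return (counter_++ % interval_) == 0; }
  int interval_;
  int counter_ = 0;
};

// Helper for checking the behavior: how many of `frames` frames
// actually trigger the expensive pass at a given interval.
int runsInFrames(int frames, int interval)
{
  FrameGate gate(interval);
  int runs = 0;
  for (int f = 0; f < frames; ++f)
    if (gate.ShouldRun())
      ++runs;
  return runs;
}
```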

It would still be good to do a profiling to see which block is the slowest.

woensug-choi commented 3 years ago

By segments, do you mean objects in the world? Sorry, I am not getting how "you only need to run the selection buffer check on, say, 20 pixels at most." Do the 20 pixels mean the vicinity of the edge line of each object?

Not running it every frame seems like an acceptable workaround! I will try that first.

Which block are you interested in? https://github.com/Field-Robotics-Lab/nps_uw_multibeam_sonar/blob/e5c5ee6285cd6e037a50ab66d909fd5fb42b0c43/src/gazebo_ros_multibeam_sonar.cpp#L616-L662

mabelzhang commented 3 years ago

Segments can be anything. I'm assuming the segmentation is inaccurate, so it won't be at the object level, but it should be significantly smaller than the number of actual pixels.

For example, there are fast algorithms like this one (I haven't used it, but it claims 60 fps on a CPU): https://github.com/Algy/fast-slic, and a list of algorithms here: https://davidstutz.de/date-list-superpixel-algorithms/

By 20 pixels, I mean the pixels at the centers of, say, 20 superpixels. Or maybe 50; it doesn't matter. You can run the selection buffer just on those centers, and that should be a lot fewer lookups than whatever it is currently - if the image is 640x480 with rayskip 10, the loop runs 64*480 = 30720 iterations now, which is a lot more than 50 or so.

Then you can apply the reflectivity at the center of each superpixel to all the pixels in that superpixel, since superpixel boundaries do correspond to some object boundaries.
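Spreading one sampled value per superpixel to the whole segment could be sketched like this (the label map and value arrays are illustrative):

```cpp
#include <cassert>
#include <vector>

// Given a per-pixel segment label map and one reflectivity value per
// segment (e.g. sampled via the selection buffer at each segment's
// center), spread each segment's value to all of its pixels.
std::vector<float> fillByLabel(const std::vector<int> &labels,
                               const std::vector<float> &segmentValue)
{
  std::vector<float> out(labels.size());
  for (std::size_t p = 0; p < labels.size(); ++p)
    out[p] = segmentValue[labels[p]];
  return out;
}
```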

For the profiling, I would put one timer around the entire inner for-loop, one around the OnSelectionClick() call, and one around the first if-statement. Those seem like they might take some time.
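One simple way to time each of those blocks is a scoped std::chrono timer (a generic sketch, not code from the plugin):

```cpp
#include <cassert>
#include <chrono>

// Measures its own lifetime and writes the elapsed seconds into the
// referenced variable, so each suspect block (the inner for-loop, the
// OnSelectionClick() call, the first if-statement) can be wrapped in
// its own scope and the timings compared.
struct ScopedTimer
{
  explicit ScopedTimer(double &out)
    : out_(out), start_(std::chrono::steady_clock::now()) {}
  ~ScopedTimer()
  {
    out_ = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start_).count();
  }
  double &out_;
  std::chrono::steady_clock::time_point start_;
};

// Example: time a stand-in block of work.
double timeBusyLoop()
{
  double elapsed = -1.0;
  {
    ScopedTimer timer(elapsed);
    volatile long sink = 0;
    for (long i = 0; i < 100000; ++i)
      sink += i;  // stand-in for the suspect block
  }
  return elapsed;
}
```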

woensug-choi commented 3 years ago

It was a code bug. I am sorry for bothering you on this subject. Your hunch to request a computational-time profile was correct. The major slowdown was in my own code, caused by a wrong loop variable. I now have a pretty decent computational cost per frame (about 0.5 seconds, which is workable for now). I've added a calculation flag that runs the calculation only when the scene has changed and stabilized, using the max distance value of the depth image. Thank you for the help! I'll proceed to pass the variables on to the sonar calculation tomorrow.
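The change-detection flag described above might look roughly like this (the function name and threshold are illustrative, not from the actual commit):

```cpp
#include <cassert>
#include <cmath>

// Recompute reflectivity only when the scene appears to have changed,
// using the max distance value of the depth image as a cheap change
// signature between frames.
bool sceneChanged(float prevMaxDepth, float currMaxDepth,
                  float threshold = 0.01f)
{
  return std::fabs(currMaxDepth - prevMaxDepth) > threshold;
}
```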

mabelzhang commented 3 years ago

Sounds great! Glad you were able to move forward with this. It seems we reached some sort of optimization along the way regardless, which is great. Hopefully the sonar part works out too.

woensug-choi commented 3 years ago

@mabelzhang I can't thank you enough on this subject. It's successfully implemented and merged. Hurray!