Closed mktk1117 closed 6 months ago
@mktk1117 Great! I saw that there is some kind of semantic sensor package in the code. Does that mean the software can extract the colors (RGB) from the camera image and match/fuse them with the corresponding cell of the elevation map? Or how does it work? Thanks.
Yes, we have now added the functionality of raycasting single images onto the elevation map cells. We'll have a paper explaining how it works and will add the link once it's ready!
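For anyone curious, the basic idea can be sketched roughly like this (an illustrative sketch only, not the repo's actual implementation, and all function/parameter names here are hypothetical): project each elevation-map cell center into the camera image using the intrinsics and the camera pose, then sample the pixel color for cells that land inside the image.

```python
import numpy as np

def colorize_cells(cell_xyz, rgb_image, K, T_cam_map):
    """Illustrative sketch: project elevation-map cell centers into a
    camera image and sample their colors.

    cell_xyz:  (N, 3) cell centers in the map frame
    rgb_image: (H, W, 3) color image
    K:         (3, 3) camera intrinsics matrix
    T_cam_map: (4, 4) transform from map frame to camera frame
    """
    H, W, _ = rgb_image.shape
    # Transform cell centers into the camera frame (homogeneous coords).
    pts_h = np.hstack([cell_xyz, np.ones((len(cell_xyz), 1))])
    pts_cam = (T_cam_map @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera.
    valid = pts_cam[:, 2] > 0
    # Pinhole projection to pixel coordinates.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    inside = valid & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    colors = np.zeros((len(cell_xyz), 3), dtype=rgb_image.dtype)
    colors[inside] = rgb_image[v[inside], u[inside]]
    return colors, inside
```

A real implementation also needs occlusion handling (the raycasting part), which this sketch ignores.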
This is cool stuff!
Some assumptions break the point cloud generation for us (we use RealSense cameras, which report depth images in mm instead of m). In pointcloud_node.py we are currently just scaling the depth image, but perhaps this should be handled by a parameter?
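A minimal sketch of what such a parameter could look like (the function and parameter names here are hypothetical, not from pointcloud_node.py): expose a depth scale so mm-based sensors like RealSense and m-based sensors both work.

```python
import numpy as np

def depth_to_meters(depth_image: np.ndarray, depth_scale: float = 0.001) -> np.ndarray:
    """Convert a raw depth image to meters.

    `depth_scale` is a hypothetical parameter: RealSense cameras typically
    report depth in millimeters (scale 0.001), while sensors that already
    report meters would use a scale of 1.0.
    """
    return depth_image.astype(np.float32) * depth_scale
```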
Also, on line 251 I'm assuming the hardcoded 8 should be replaced by a parameter?
Also, since ros_numpy hasn't received an update in quite a while, it breaks with newer NumPy versions because the np.float attribute no longer exists. A hack is to redefine it after importing NumPy: np.float = np.float32.
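For reference, the hack looks like this (NumPy removed the deprecated np.float alias in 1.24; whether float32 or float64 is the right substitute depends on what precision ros_numpy expects, so treat the choice below as an assumption):

```python
import numpy as np

# Workaround: ros_numpy references the alias `np.float`, which NumPy
# removed in version 1.24. Restore it before importing ros_numpy.
if not hasattr(np, "float"):
    np.float = np.float32  # assumption: float32 matches the expected precision

# import ros_numpy  # must happen after the patch above
```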
I can't add these changes to the pull request, but here are some initial bugs/findings.
Hi Becktor, thanks a lot for your reports!
Large update adding functionality for multi-modal layers such as color, semantic, or feature layers.