YBachmann opened this issue 3 months ago
I wrote a simple Python node as a proof of concept, which takes in the same parameter.yaml file as the depthimage_to_laserscan node and publishes a visualization of the `scan_height` and `scan_offset` on the RGB and depth image:

In this image a `scan_offset` of 0.45 and a `scan_height` of 180 px were used (image height = 720).

Using the visualizations it is really easy to set the parameters such that the right parts of the image (e.g. not the floor/ceiling) are used for the laserscan generation.
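For illustration, the band of rows used in the example above could be computed like this (a minimal sketch; treating `scan_offset` as the vertical center of the band as a fraction of the image height and `scan_height` as the band height in pixels is my assumption and may not match the exact semantics in PR #80):

```python
def scan_rows(image_height: int, scan_height: int, scan_offset: float) -> tuple[int, int]:
    """Return the (top, bottom) pixel rows of the band used for the laserscan.

    Assumption (not taken from the package): scan_offset is the vertical
    center of the band as a fraction of the image height, and scan_height
    is the band height in pixels.
    """
    center = int(round(scan_offset * image_height))
    top = max(0, center - scan_height // 2)
    bottom = min(image_height - 1, center + scan_height // 2)
    return top, bottom


# Values from the example above: image height 720, scan_offset 0.45, scan_height 180
print(scan_rows(720, 180, 0.45))  # -> (234, 414)
```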
We currently use a robot with multiple tilted cameras, which necessitates careful parameterization of `scan_height` (and `scan_offset`, see PR https://github.com/ros-perception/depthimage_to_laserscan/pull/80) so that e.g. the floor/ceiling in the depth image is not used for the calculation of the laserscan.

It would be really useful if there was a way to see an overlay on the depth image showing the rows being used for the laserscan. I saw that on the wiki page, under section 1.2, there is an overlay of the laserscan onto the depth image. Was this created manually, or is there already a way to visualize this overlay for a live depth image?
I would be open to implementing an overlay of the `scan_height` (and `scan_offset`) for easier parameterization and debugging, either directly inside this package or as a separate tool. But before I do that, I wanted to ask whether something like this already exists and whether other people might find it useful, too.