iralabdisco / ira_laser_tools

All laser type assemblers and manipulators.
BSD 3-Clause "New" or "Revised" License

laserscan_virtualizer - configuring range not working #26

Open didrif14 opened 4 years ago

didrif14 commented 4 years ago

Hello.

I am trying to use the laserscan_virtualizer to convert a point cloud to a laser scan. The point cloud comes from a Gazebo Kinect plugin. I can visualize the laser scan in RViz; however, the laser scan is only shown approximately 2 meters in front of the robot, even though I have set the range to 25 meters.
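To make the symptom concrete, here is a minimal, hypothetical sketch of what a point-cloud-to-laser-scan projection does (this is illustrative plain Python, not the package's actual implementation): each point is binned by bearing angle, the closest return per bin is kept, and anything outside `[range_min, range_max]` is silently dropped.

```python
import math

def cloud_to_scan(points, angle_min=-3.14, angle_max=3.14,
                  angle_inc=0.0058, range_min=0.45, range_max=25.0):
    """Project 3-D points onto a planar scan: one range per angular bin,
    keeping the closest point. Points outside [range_min, range_max]
    are ignored, which is why a too-small range_max truncates the scan."""
    n_bins = int((angle_max - angle_min) / angle_inc)
    ranges = [float('inf')] * n_bins
    for x, y, z in points:
        r = math.hypot(x, y)
        if not (range_min <= r <= range_max):
            continue
        i = int((math.atan2(y, x) - angle_min) / angle_inc)
        if 0 <= i < n_bins:
            ranges[i] = min(ranges[i], r)
    return ranges

# A point 10 m straight ahead survives; one at 30 m is dropped
# because it exceeds range_max.
scan = cloud_to_scan([(10.0, 0.0, 0.2), (30.0, 0.0, 0.2)])
```

If the output scan stops at ~2 m while `range_max` is 25 m, the cutoff is more likely coming from the input data or the frame transforms than from this clipping step.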

I have removed the TF statements, because my TF is published through robot_state_publisher. This is my launch file:

```xml
<?xml version="1.0" encoding="utf-8"?>
```
trigal commented 4 years ago

do you have a bag with some data?

didrif14 commented 4 years ago

> do you have a bag with some data?

Thanks for your quick response! I recorded the laserscan and uploaded it here: https://drive.google.com/file/d/1zpugHnILWYGGUH686pbKak7r3IzMwbq7/view?usp=sharing

Although it has some errors stating: `For frame [/frnt_cam_p2l]: Fixed Frame [map] does not exist`. Here is my tf-tree: https://drive.google.com/file/d/1yToZ48NtmCslbveEh5wmPaDoPBwmPaOz/view?usp=sharing

I uploaded a short video which shows the pointcloud, and how the laserscan is visualized. https://youtu.be/gBR92UYLipE

trigal commented 4 years ago

It has been a very long time since I last used this tool; we mainly use the laser merger one. However, let's see: the laser data in your bag (which is the output from the virtualizer, I suppose) is in the frame frnt_cam_p2l. Which frame are the original point cloud points in? I suppose laser_link, am I correct? Could you upload a bag with the TF and the point cloud as well?

From the bagfile:

```
header:
  seq: 1457
  stamp:
    secs: 161
    nsecs: 825000000
  frame_id: /frnt_cam_p2l
angle_min: -3.1400001049
angle_max: 3.1400001049
angle_increment: 0.00579999992624
time_increment: 0.0
scan_time: 0.0333333313465
range_min: 0.449999988079
range_max: 25.0
```
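Note that the message already carries `range_max: 25.0`, which suggests the range parameter did reach the node, so the 2 m cutoff likely originates elsewhere. As a quick sanity check of the scan geometry, the angle span and increment from the bag imply a full 2π sweep:

```python
# Values copied from the LaserScan message in the bag.
angle_min = -3.1400001049
angle_max = 3.1400001049
angle_increment = 0.00579999992624

# Number of angular increments across the sweep (~one beam per increment).
n_increments = round((angle_max - angle_min) / angle_increment)
print(n_increments)  # 1083
```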

didrif14 commented 4 years ago

My plan is to convert the point clouds to laser scans, then merge the four laser scans with the multi-merger.

I realize that I might have configured the launch file incorrectly. The point cloud comes from the frnt_cam_opt frame. The frame where I want the new laser scan (frnt_scan) is frnt_cam_p2l. Ultimately I want to merge my four laser scans into the laser_lnk frame.

I have uploaded two .bag files, one contains tf and laserscan, the other contains tf and pointcloud.

Tf and laser: https://drive.google.com/file/d/1M0P72A9AXiJY7WzBDJGQH_OFTCz6JVJh/view?usp=sharing

Tf and pointcloud: https://drive.google.com/file/d/1ajPpfPThBCS3gpvTRVtYUdZl62lSfC8G/view?usp=sharing

The point cloud .bag file is very large; maybe I have to reduce the resolution of the Kinect plugin?

trigal commented 4 years ago

From your bag file (the tf data) it seems that you're publishing the transformation between these two frames always with the same timestamp; to be clear, not with ros::Time::now(). I'm not sure if this might be the issue, but could you please check? TF has a standard time window of 10 seconds.
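The 10-second window matters here: tf keeps only a short history of transforms, so a transform stamped once at startup becomes unresolvable as soon as newer data pushes it out of the buffer. A toy model of that eviction behavior (plain Python, only illustrating the idea, not the real tf API):

```python
class TinyTfBuffer:
    """Toy model of tf's fixed-length time window (default ~10 s).
    Entries older than `window` relative to the newest data are evicted,
    so a frame stamped once at startup cannot be looked up later."""
    def __init__(self, window=10.0):
        self.window = window
        self.entries = []  # (stamp, transform) pairs, newest last

    def insert(self, stamp, transform):
        self.entries.append((stamp, transform))
        newest = self.entries[-1][0]
        self.entries = [(t, tr) for t, tr in self.entries
                        if newest - t <= self.window]

    def can_lookup(self, stamp):
        return any(abs(t - stamp) <= self.window
                   for t, _ in self.entries)

buf = TinyTfBuffer()
buf.insert(105.0, "frnt_cam_opt -> frnt_cam_p2l")  # published once at startup
buf.insert(300.0, "base_lnk -> frw_link")          # wheels keep updating
print(buf.can_lookup(105.0))  # False: the startup-time stamp was evicted
```

Continuously re-stamping the transform with the current time (or publishing fixed frames as latched static transforms) avoids this.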

so your configuration is 4 kinects on the robot and you want to simulate 4 lasers and then merge them?

```
rosrun tf tf_echo /frnt_cam_opt /frnt_cam_p2l
At time 0.000
- Translation: [0.000, 0.000, 0.000]
- Rotation: in Quaternion [0.707, 0.000, 0.000, 0.707]
            in RPY (radian) [1.571, -0.000, 0.000]
            in RPY (degree) [90.000, -0.000, 0.000]
At time 0.000
- Translation: [0.000, 0.000, 0.000]
- Rotation: in Quaternion [0.707, 0.000, 0.000, 0.707]
            in RPY (radian) [1.571, -0.000, 0.000]
            in RPY (degree) [90.000, -0.000, 0.000]
```

```
header:
  seq: 1672
  stamp:
    secs: 84
    nsecs: 239000000
  frame_id: frnt_cam_opt
height: 512
width: 512
fields: 
```
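The tf_echo output above reports a pure 90° roll between the two frames. As a sanity check, converting the printed quaternion [0.707, 0.000, 0.000, 0.707] (x, y, z, w) back to roll/pitch/yaw with the standard formulas reproduces the printed RPY:

```python
import math

def quat_to_rpy(x, y, z, w):
    """Quaternion (x, y, z, w) -> roll/pitch/yaw in radians,
    standard ZYX (yaw-pitch-roll) convention."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

r, p, y = quat_to_rpy(0.707, 0.0, 0.0, 0.707)
print(math.degrees(r))  # ~90 degrees, matching tf_echo's RPY line
```

So the rotation itself looks correct; the suspicious part is the `At time 0.000` stamp, which points to a transform that is not being re-published with current timestamps.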
didrif14 commented 4 years ago

> From your bag file (the tf data) it seems that you're publishing the transformation between these two frames always with the same timestamp; to be clear, not with ros::Time::now().

I am not quite sure how to do this, as the TF is being published by the /state_publisher.

> so your configuration is 4 kinects on the robot and you want to simulate 4 lasers and then merge them?

Correct. I have managed to use a depth-image-to-laserscan package and used your merger to combine them.
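Once every scan is expressed in a common frame (laser_lnk here) on the same angular grid, the core of merging reduces to keeping the nearest valid return per angular bin. This is an illustrative plain-Python reduction of that idea, not the package's actual code:

```python
def merge_scans(scans):
    """Merge aligned scans (same frame, same angular grid, equal length)
    by keeping the nearest valid return in each angular bin.
    Bins with no return stay at infinity."""
    assert len({len(s) for s in scans}) == 1, "scans must share one grid"
    return [min(rs) for rs in zip(*scans)]

# Two toy 3-bin scans; inf marks "no return" in that bin.
front = [2.0, float('inf'), 5.0]
rear  = [3.0, 1.5, float('inf')]
print(merge_scans([front, rear]))  # [2.0, 1.5, 5.0]
```

The real merger additionally has to transform each input scan into the target frame first, which is exactly where stale TF stamps would break things.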

trigal commented 4 years ago

I see, but if you open your tf-tree you will notice that all the Kinects have very old transforms (105 secs; I suppose just the first one, from the time you "power on" your system) and nothing more, whereas frw_link, rlw_link, and rrw_link have updated transforms. This might be an issue, and from what you have told me, maybe the problems arise from the depth-to-laserscan package configuration. Your state publisher is not updating all the transforms; I don't know why.

didrif14 commented 4 years ago

I have noticed the same thing; however, I assumed it was correct because these frames are fixed with respect to base_lnk, whereas the rrw, rlw, frw, and flw links are wheels and are constantly turning. But if that is not the case, I should probably find the reason for this.

trigal commented 4 years ago

But just to understand whether the package is working properly, can you use the data from the Kinect directly, without all the system and transformations? I have vague memories, but this should be exactly what we did when we wrote this package; it was 6 or more years ago, so I don't remember any details.