githubdu closed this issue 8 years ago
Hi @githubdu, the code seems to work properly.
The only problem here is that the robot loaded is not the turtlebot, but the big pole close to the table.
You need to set the parameters in filter_parameters.yaml, especially these:

```yaml
models:
  - model: "robot_description"
    tf_prefix: ""
```
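For context, this is roughly how such a parameter file ends up on the parameter server; the file path here is an assumption and should match wherever your filter_parameters.yaml actually lives:

```xml
<!-- Sketch only: load the filter parameters onto the parameter server
     before the filter node starts. The file path is an assumption. -->
<launch>
  <rosparam command="load"
            file="$(find realtime_urdf_filter)/launch/filter_parameters.yaml" />
</launch>
```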
Dear @JimmyDaSilva, thank you for your kind reply. I am sorry I didn't explain clearly before, because I was not expecting any reply. The purpose here is to filter out the rrbot, a.k.a. 'the big pole close to the table', which is a robot model from the Gazebo tutorials. The table and the turtlebot are the 'background'.
The problems are:
1) In filter.launch, the input is a depth image, NOT depth points, and one of the outputs is a depth_filtered image, NOT depth_filtered points. How do I get depth_filtered points?
2) When I check the input depth image and the output depth_filtered image in RViz, they seem to be exactly the same. Are they?
3) The other output from filter.launch is urdf_filteredmask. But when I check it in RViz, no image is received. Why is that?
4) There are supposed to be 4 images in the GUI window, but only the top 2 (the light blue one and the white one) showed up. What are the 4 images, and why didn't the bottom 2 (the dark blue one and the black one) show up?
Thank you again.
You should have a look at #8, #9, and #10. I had big problems using this package and finally managed to make the proper changes to get it working.
1) This package uses GPU computation to drastically speed things up, and hence prefers to work with the image format instead of point clouds. If you want a point cloud, you can create it from the depth and color images. Check out the launch file in my fork: https://github.com/JimmyDaSilva/realtime_urdf_filter/blob/current-settings/launch/create_pc.launch
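As a hedged sketch of that idea (not Jimmy's exact file), a depth_image_proc/point_cloud_xyzrgb nodelet can assemble a colored point cloud from a registered depth image plus the color image; every topic name below is an assumption and must be remapped to match your own camera:

```xml
<launch>
  <!-- Nodelet manager to host the point-cloud nodelet -->
  <node pkg="nodelet" type="nodelet" name="pc_manager" args="manager" />

  <!-- Build an XYZRGB point cloud from a registered depth image and a
       color image. All topic names are placeholders for this sketch. -->
  <node pkg="nodelet" type="nodelet" name="depth_to_pc"
        args="load depth_image_proc/point_cloud_xyzrgb pc_manager">
    <remap from="rgb/camera_info"             to="camera/rgb/camera_info" />
    <remap from="rgb/image_rect_color"        to="camera/rgb/image_rect_color" />
    <remap from="depth_registered/image_rect" to="camera/depth_filtered/image_rect" />
    <remap from="depth_registered/points"     to="camera/depth_filtered/points" />
  </node>
</launch>
```

Feeding the filter's depth output in as `depth_registered/image_rect` is what makes the resulting cloud robot-free.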
2) depth_filtered should be the filtered image, i.e. with no robot in it.
3) The mask image has an encoding problem and won't display properly in RViz. I will try to correct that someday. It's not really useful anyway.
4) Hmm, I have 6 images, not 4. And yes, I never got anything on the last two.
Glad to help, Jimmy
Thank you very much.
Is it indispensable to launch kinect_img_bg_store.launch and kinect_img_bg_sub.launch before launching create_pc.launch? My Kinect is running in Gazebo. If I launch kinect_img_bg_store.launch and kinect_img_bg_sub.launch, then the background_sub image is all black and filtered_points gets nothing. If I don't launch them and instead modify create_pc.launch with remap from="depth_registered/image_rect" to="$(arg kinect_name)/depth_filtered/image_rect", then filtered_points can't be displayed in RViz, although there is data in it.
@githubdu Take a step back and try to understand what is going on here. My launch files are for my own purpose and used for a specific application. You cannot expect to run them "as is".
I suggested you look at create_pc.launch so you can see how to launch a nodelet that creates a point cloud from the depth and color images. So, of course, you need to change the remaps to match the topics published by the realtime_urdf_filter.
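For instance (a sketch, reusing the remap quoted earlier in this thread; the kinect_name argument is assumed to be defined in the launch file), the nodelet's depth input would be pointed at the filter's output like this:

```xml
<!-- Sketch: point the nodelet's depth input at the filter's output.
     $(arg kinect_name) is an assumption from the earlier message. -->
<remap from="depth_registered/image_rect"
       to="$(arg kinect_name)/depth_filtered/image_rect" />
```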
Thank you very much, and sorry for the trouble. I think I got what I need. Thanks again.
@githubdu No worries. Let me know if it works !
It worked! I finally got what I wanted. @JimmyDaSilva, thanks.
Glad to know :)
Hi @githubdu, your work looks very nice! Actually I am also building such simulation environment with this package. However, I am really struggling. Could you please share with me your code? Thanks a lot!!
Can you help me with this, please? I used filter.launch to filter rrbot.urdf, but no filtered image is received in RViz.