ros-industrial-attic / yak_ros

Example ROS frontend node for the Yak TSDF package
Apache License 2.0

I can't find any examples. #36

Open bumbastic opened 4 years ago

bumbastic commented 4 years ago

How do I set up Yak with, e.g., an Azure Kinect, and generate a mesh or point cloud?

schornakj commented 4 years ago

This is very dependent on the specifics of your application and environment, and there are some implicit limitations (e.g. currently only supported on Ubuntu 16.04 and 18.04, so it might not work out-of-the-box on Windows). What OS are you using?

Since yak_ros is a ROS package, you would set it up like any other ROS-based pipeline: run the ROS driver for your camera and feed its output, along with TF data, to the yak_ros node.

One important thing is that you'll need to provide a TF frame for the location of the camera in space. This package is currently written under the assumption that you have a camera attached to a robot arm and that its position is known through the robot's forward kinematics, but I recognize that this doesn't match everyone's use case.

bumbastic commented 4 years ago

Thank you for your answer. I'll try to set it up on Ubuntu 18.04 with ROS Melodic. I intend to generate the TF from a linear encoder. The setup will be a sensor in a fixed pose, and the part of interest will move linearly on a conveyor. I see one problem, though. The part may be only about 2 meters in height, but up to 25 meters in length. Can I use Yak to generate a mesh of this size?

schornakj commented 4 years ago

Your proposed setup sounds reasonable. It will be important to precisely calibrate the pose of the camera relative to the conveyor belt so the surface of the part is reconstructed accurately.
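A minimal sketch of turning the encoder reading into the transform that would be published on TF. The frame names, the conveyor axis, and the helper function are assumptions for illustration, not part of yak_ros:

```python
# Hypothetical sketch: convert a linear-encoder reading into the pose that
# yak_ros would look up on TF. With a fixed camera and a part moving on a
# conveyor, you can equivalently publish the part's frame translating past
# the camera. The conveyor is assumed to move the part along +x.

def part_transform(encoder_m):
    """Pose of the moving part frame relative to the fixed world frame,
    given the encoder reading in meters. Identity rotation is assumed
    (the part does not rotate on the conveyor)."""
    return {
        'parent_frame': 'world',           # fixed reference frame (assumed name)
        'child_frame': 'part',             # frame attached to the moving part
        'translation': (encoder_m, 0.0, 0.0),
        'rotation_xyzw': (0.0, 0.0, 0.0, 1.0),
    }

# In a ROS node, these fields would be copied into a
# geometry_msgs/TransformStamped and sent with a tf2_ros
# TransformBroadcaster each time the encoder updates.
print(part_transform(1.25)['translation'])
```

The key design point is that only the relative motion between camera and part matters to the TSDF fusion, so a fixed camera plus a moving part frame is equivalent to a moving camera over a fixed part.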

The TSDF algorithm in this package uses a dense voxel volume, so the maximum part size that can be reconstructed as a single mesh depends on the required voxel resolution, the dimensions of the volume containing the part, and the amount of available GPU memory. If you want to scan a volume that is 200 cm tall, 200 cm wide, and 2500 cm long using 1 cm voxels, the volume will contain 100,000,000 voxels; at 4 bytes per voxel this works out to (very roughly) 400 MB of GPU memory, which is quite reasonable. If you scan the same volume using 1 mm voxels, on the other hand, it will require 1000x more memory, which is probably more challenging to provision.
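The arithmetic above can be checked with a short script (the helper function is illustrative, not part of Yak):

```python
# Quick check of the GPU-memory estimate above: a dense TSDF volume stores
# a fixed number of bytes per voxel (4 here, matching the rough figure in
# the comment).

def tsdf_memory_bytes(dims_m, voxel_m, bytes_per_voxel=4):
    """Memory required for a dense voxel volume.

    dims_m: (x, y, z) dimensions of the volume in meters.
    voxel_m: edge length of one voxel in meters.
    """
    voxels = 1
    for d in dims_m:
        voxels *= int(round(d / voxel_m))
    return voxels * bytes_per_voxel

# 2 m x 2 m x 25 m part at 1 cm resolution:
coarse = tsdf_memory_bytes((2.0, 2.0, 25.0), 0.01)
print(coarse)           # 400000000 bytes, i.e. ~400 MB
# The same volume at 1 mm resolution needs 1000x more memory:
fine = tsdf_memory_bytes((2.0, 2.0, 25.0), 0.001)
print(fine // coarse)   # 1000
```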