How to set up YAK with e.g. an Azure Kinect, and generate a mesh or point cloud?

bumbastic opened this issue 4 years ago
This is very dependent on the specifics of your application and environment, and there are some implicit limitations (e.g. currently only supported on Ubuntu 16.04 and 18.04, so it might not work out-of-the-box on Windows). What OS are you using?
Since yak_ros is a ROS package, an example ROS-based setup might look like this:

1. Start the ROS driver for your depth camera. As long as it publishes UINT16 depth images or PointCloud2 point clouds it should work OK.
2. Look at yak_ros/launch/demo.launch as an example of how to set up the yak_ros node to subscribe to images from the depth camera. At minimum you'll need to change the image topic remapping so the node subscribes to the depth images published by the camera driver, as well as the camera matrix and image dimensions to match the intrinsic calibration of the depth image. Here's the important part:
https://github.com/ros-industrial/yak_ros/blob/d3166192d679b2833c7d5ab6afbfd1cde05b834f/yak_ros/launch/demo.launch#L10-L22

One important thing is that you'll need to provide a TF frame for the location of the camera in space. This package is currently written under the assumption that you have a camera attached to a robot arm and that its position is known through the robot's forward kinematics, but I recognize that this doesn't match everyone's use case.
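To fill in those launch parameters, it can help to first inspect what your camera driver actually publishes. Below is a minimal sketch of a node that prints the depth image encoding, image dimensions, and intrinsic matrix; the topic names `/depth/image_raw` and `/depth/camera_info` are assumptions, so substitute whatever your driver uses.

```python
#!/usr/bin/env python
# Prints the image dimensions and camera matrix needed for the yak_ros launch file.
# Topic names below are assumptions; substitute your camera driver's actual topics.
import rospy
from sensor_msgs.msg import Image, CameraInfo

def on_image(msg):
    # yak_ros expects UINT16 depth images, i.e. an encoding of "16UC1" (or "mono16")
    rospy.loginfo("encoding=%s width=%d height=%d", msg.encoding, msg.width, msg.height)

def on_info(msg):
    # K is the 3x3 intrinsic matrix in row-major order: [fx 0 cx; 0 fy cy; 0 0 1]
    rospy.loginfo("camera matrix K=%s", list(msg.K))

if __name__ == "__main__":
    rospy.init_node("depth_camera_inspector")
    rospy.Subscriber("/depth/image_raw", Image, on_image)
    rospy.Subscriber("/depth/camera_info", CameraInfo, on_info)
    rospy.spin()
```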
Thank you for your answer. I'll try to set it up on Ubuntu 18.04 with ROS Melodic. I intend to generate the TF from a linear encoder: the setup will be a sensor in a fixed pose, and the part of interest will move linearly on a conveyor. I see one problem, though. The part may be only about 2 meters in height, but up to 25 meters in length. Can I use YAK to generate a mesh of this size?
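A minimal sketch of what such an encoder-driven TF broadcaster might look like, assuming a hypothetical `/conveyor/position` topic (std_msgs/Float64, in meters) and made-up frame names and offsets; the real offsets would come from your camera-to-conveyor calibration:

```python
#!/usr/bin/env python
# Broadcasts the pose of a fixed camera relative to a frame attached to the
# moving part, computed from a linear encoder reading. The topic name, frame
# names, and offsets are all illustrative assumptions.
import rospy
import tf2_ros
from std_msgs.msg import Float64
from geometry_msgs.msg import TransformStamped

broadcaster = None

def on_encoder(msg):
    # As the part advances by msg.data meters along the conveyor, the camera
    # appears to move backwards along the part frame's x axis.
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "part_frame"    # frame the TSDF volume is anchored to
    t.child_frame_id = "camera_frame"   # optical frame of the depth camera
    t.transform.translation.x = -msg.data + 0.5  # 0.5 m: assumed calibrated offset
    t.transform.translation.z = 1.5              # assumed camera height above the part
    t.transform.rotation.w = 1.0                 # assumed identity orientation
    broadcaster.sendTransform(t)

if __name__ == "__main__":
    rospy.init_node("encoder_tf_broadcaster")
    broadcaster = tf2_ros.TransformBroadcaster()
    rospy.Subscriber("/conveyor/position", Float64, on_encoder)
    rospy.spin()
```

The idea is that by anchoring the TSDF volume to a frame attached to the part and broadcasting the camera pose relative to it, the fixed-camera/moving-part case looks to the reconstruction like an ordinary moving camera, since only the relative motion matters.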
Your proposed setup sounds reasonable. It will be important to precisely calibrate the pose of the camera relative to the conveyor belt so the surface of the part is reconstructed accurately.
The TSDF algorithm in this package uses a dense voxel volume, so the maximum part size that can be reconstructed as a single mesh depends on the required voxel resolution, the dimensions of the volume containing the part, and the amount of available GPU memory. If you want to scan a volume that is 200 cm tall, 200 cm wide, and 2500 cm long using 1 cm voxels, the volume will contain 100,000,000 voxels; at 4 bytes per voxel this will use (very roughly) 400 MB of GPU memory, which is quite reasonable. On the other hand, if you scan the same volume using 1 mm voxels it will require 1000x more memory, which is probably more challenging to provision.
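To make that arithmetic concrete, here is a quick back-of-the-envelope sketch of the memory estimate above (the 4 bytes/voxel figure is taken from the comment; everything else is just the stated volume dimensions):

```python
# Memory estimate for a dense TSDF volume of 2.0 m x 2.0 m x 25.0 m
# at two candidate voxel resolutions, assuming 4 bytes per voxel.
def tsdf_memory_bytes(dims_m, voxel_m, bytes_per_voxel=4):
    voxels = 1
    for d in dims_m:
        voxels *= int(round(d / voxel_m))
    return voxels * bytes_per_voxel

for voxel in (0.01, 0.001):  # 1 cm and 1 mm voxels
    gb = tsdf_memory_bytes((2.0, 2.0, 25.0), voxel) / 1e9
    print("voxel=%4.0f mm -> %6.1f GB" % (voxel * 1000, gb))
# voxel=  10 mm ->    0.4 GB
# voxel=   1 mm ->  400.0 GB
```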