gavanderhoorn opened this issue 4 years ago
Definitely possible and desirable. This would require some modifications to Yak to associate camera intrinsics with each image rather than loading them from a class-member parameter variable. Ideally this change would be designed to allow data from multiple cameras to be integrated into the volume simultaneously.
Another improvement would be using the distortion coefficients provided in the `D` field of the `camera_info` topic. Right now Yak assumes that all images are rectified and doesn't do distortion correction. This would require writing a CUDA implementation of a distortion correction function, which isn't especially complicated.
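To make the distortion-correction idea concrete, here is a minimal sketch in plain C++ of the per-pixel math such a CUDA kernel would implement, assuming the `plumb_bob` (Brown-Conrady) model that `sensor_msgs/CameraInfo` uses, where `D = [k1, k2, p1, p2, k3]`. The struct and function names are illustrative, not from Yak.

```cpp
#include <cmath>

// Hypothetical distortion coefficients, matching the plumb_bob layout of
// the D field in sensor_msgs/CameraInfo: [k1, k2, p1, p2, k3].
struct Distortion { double k1, k2, p1, p2, k3; };

// Forward model: map an undistorted normalized image coordinate (x, y)
// to its distorted position (xd, yd). Rectification inverts this mapping.
void distort(const Distortion& d, double x, double y, double& xd, double& yd)
{
    const double r2 = x * x + y * y;                       // squared radius
    const double radial = 1.0 + d.k1 * r2 + d.k2 * r2 * r2 // radial term
                        + d.k3 * r2 * r2 * r2;
    xd = x * radial + 2.0 * d.p1 * x * y + d.p2 * (r2 + 2.0 * x * x);
    yd = y * radial + d.p1 * (r2 + 2.0 * y * y) + 2.0 * d.p2 * x * y;
}
```

Each thread of the kernel would run this (or its inverse lookup) for one pixel, so porting it to CUDA is mostly a matter of marking it `__device__`.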
Since these features would require changes to Yak I'll make an enhancement issue in the Yak repo as well.
> This would require some modifications to Yak to associate camera intrinsics with each image rather than loading them from a class-member parameter variable. Ideally this change would be designed to allow data from multiple cameras to be integrated into the volume simultaneously.
This would certainly be nice, but I was thinking of a first simple enhancement: using `ros::topic::waitForMessage(..)` to retrieve a single `CameraInfo` message and use it for the entire run.
I think that would be really straightforward. If there isn't a `camera_info` topic available, would you fall back to intrinsics loaded as ROS parameters?
Yes, that would make sense to maintain current behaviour.
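A minimal sketch of the fallback discussed above, with ROS types stubbed out as plain structs so the decision logic is clear. In the real node the optional would be filled by `ros::topic::waitForMessage<sensor_msgs::CameraInfo>(topic, nh, timeout)`, which returns a null pointer when the timeout expires; the struct and function names here are hypothetical.

```cpp
#include <optional>

// Illustrative container for the four intrinsics yak_ros needs.
struct Intrinsics { double fx, fy, cx, cy; };

// Prefer intrinsics received on camera_info; otherwise keep the current
// behaviour of reading them from the camera_matrix ROS parameter.
Intrinsics selectIntrinsics(const std::optional<Intrinsics>& from_camera_info,
                            const Intrinsics& from_params)
{
    return from_camera_info ? *from_camera_info : from_params;
}
```

Because the message is fetched once at startup, this keeps the existing parameter-based configuration working unchanged when no publisher is present.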
The current implementation of `yak_ros` requires users to provide the relevant camera intrinsics via the `camera_matrix` ROS parameter. I'm wondering whether it would be possible to instead retrieve a message from the `camera_info` topic in the same namespace as the depth image. Afaict (but I'm not a "vision guy"), the `K` field of the message contains the required `fx`, `fy`, `cx` and `cy`. It also contains the resolution (`width` and `height` fields).
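For reference, the `K` field is a row-major 3x3 matrix, `[fx 0 cx; 0 fy cy; 0 0 1]`, so the four values are at fixed indices. A small sketch (struct name illustrative):

```cpp
#include <array>

// Illustrative container for the intrinsics yak_ros reads today from the
// camera_matrix parameter.
struct CameraMatrix { double fx, fy, cx, cy; };

// Extract fx, fy, cx, cy from the row-major K field of a
// sensor_msgs/CameraInfo message: [fx 0 cx; 0 fy cy; 0 0 1].
CameraMatrix fromK(const std::array<double, 9>& K)
{
    return {K[0], K[4], K[2], K[5]};
}
```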