Hello,
I am currently starting my efforts with the Ensenso camera in ROS. I am running Ubuntu 16.04, I have both the SDK and the ueyeeth driver installed, and I am connecting to an N35 stereo camera.
I can retrieve data from the camera and visualize it in RViz. However, the ensenso_optical_frame, in which everything should be represented, is not added automatically: I was able to see the raw camera images but not the point cloud. I therefore added that frame myself, but the point cloud that results from this is poor, to say the least.
Could this be related to the frame I added for the camera myself? The tutorial is not very comprehensive on this matter.
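For reference, this is roughly how I added the frame — a static transform from my robot's base frame to ensenso_optical_frame (the parent frame name and the all-zero pose are just what I tried; the actual mounting pose of the camera is not reflected here, which may well be part of the problem):

```xml
<launch>
  <!-- Publishes a static identity transform base_link -> ensenso_optical_frame.
       Arguments: x y z yaw pitch roll parent_frame child_frame -->
  <node pkg="tf2_ros" type="static_transform_publisher"
        name="ensenso_frame_publisher"
        args="0 0 0 0 0 0 base_link ensenso_optical_frame" />
</launch>
```
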
Kind regards