Closed nag92 closed 2 years ago
As I mentioned in my comment before, the reported point cloud is always in OpenGL's camera frame. That is, depth along -Z, right along +X, and up along +Y.
There is a fixed rotation between the OpenGL camera and the AMBF camera. This rotation is RPY (-PI/2, -PI/2, 0) about the fixed x, y, and z axes, and it rotates a frame from the OpenGL convention to the AMBF convention. This means that the AMBF camera's +X is along OpenGL's +Z, its +Y is along OpenGL's +X, and its +Z is along OpenGL's +Y. You may use this information to rotate the depth point cloud to be w.r.t. AMBF's camera.
This is no longer the case: the reported depth point cloud is now w.r.t. the AMBF convention.
Here is some sample code that can be used to transform the point cloud:
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <geometry_msgs/TransformStamped.h>
#include <tf/transform_datatypes.h>
#include <tf2/LinearMath/Quaternion.h>
#include <tf2_ros/transform_broadcaster.h>
// PCL-specific includes
#include <pcl_ros/transforms.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <string>

ros::Publisher pub;
tf::StampedTransform transform;
std::string parent;
std::string child;

void cloud_callback(const sensor_msgs::PointCloud2ConstPtr& input)
{
    // Convert the incoming ROS message into a PCL cloud.
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_transformed(new pcl::PointCloud<pcl::PointXYZRGB>);
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr temp_cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
    pcl::PCLPointCloud2 pcl_pc2;
    pcl_conversions::toPCL(*input, pcl_pc2);
    pcl::fromPCLPointCloud2(pcl_pc2, *temp_cloud);

    // Rotate the cloud from the OpenGL convention into the AMBF convention.
    pcl_ros::transformPointCloud(*temp_cloud, *cloud_transformed, transform);

    sensor_msgs::PointCloud2 cloud_publish;
    pcl::toROSMsg(*cloud_transformed, cloud_publish);
    cloud_publish.header = input->header;
    cloud_publish.header.frame_id = child;
    pub.publish(cloud_publish);
}

int main(int argc, char** argv)
{
    // Initialize ROS
    ros::init(argc, argv, "point_cloud_transform");
    ros::NodeHandle nh;
    ROS_INFO("Node started");
    ros::Rate rate(1000); // loop at 1000 Hz

    // Fixed rotation from the OpenGL camera frame to the AMBF camera frame.
    double roll = -0.5 * M_PI;
    double pitch = -0.5 * M_PI;
    double yaw = 0.0;
    parent = "/depth_camera";
    child = "/AMBF_camera";

    transform.setOrigin(tf::Vector3(0.0, 0.0, 0.0));
    // Note: the tf::Quaternion(a, b, c) constructor takes yaw FIRST
    // (yaw, pitch, roll), so passing (roll, pitch, yaw) to it silently
    // builds the wrong rotation. Use setRPY instead.
    tf::Quaternion q_tf;
    q_tf.setRPY(roll, pitch, yaw);
    transform.setRotation(q_tf);

    // Create a ROS subscriber for the input point cloud
    ros::Subscriber sub = nh.subscribe("/ambf/env/cameras/depth_camera/DepthData", 1, cloud_callback);
    // Create a ROS publisher for the output point cloud
    pub = nh.advertise<sensor_msgs::PointCloud2>("/transformed_cloud", 1);

    static tf2_ros::TransformBroadcaster br;

    while (ros::ok())
    {
        // Broadcast the (static) transform so the frames show up in TF/RViz.
        geometry_msgs::TransformStamped transformStamped;
        transformStamped.header.stamp = ros::Time::now();
        transformStamped.header.frame_id = parent;
        transformStamped.child_frame_id = child;
        transformStamped.transform.translation.x = 0.0;
        transformStamped.transform.translation.y = 0.0;
        transformStamped.transform.translation.z = 0.0;
        tf2::Quaternion q;
        q.setRPY(roll, pitch, yaw);
        transformStamped.transform.rotation.x = q.x();
        transformStamped.transform.rotation.y = q.y();
        transformStamped.transform.rotation.z = q.z();
        transformStamped.transform.rotation.w = q.w();
        br.sendTransform(transformStamped);
        ros::spinOnce();
        rate.sleep();
    }
    return 0;
}
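To build the node above as a catkin package, a minimal CMakeLists.txt along these lines should work. This is a sketch under assumptions: the package/target name point_cloud_transform and the source path src/point_cloud_transform.cpp are mine; adjust them to your workspace.

```cmake
cmake_minimum_required(VERSION 3.0.2)
project(point_cloud_transform)

# Catkin components used by the node (assumed dependency list).
find_package(catkin REQUIRED COMPONENTS
  roscpp
  sensor_msgs
  geometry_msgs
  tf
  tf2
  tf2_ros
  pcl_ros
  pcl_conversions
)
find_package(PCL REQUIRED)

catkin_package()

include_directories(${catkin_INCLUDE_DIRS} ${PCL_INCLUDE_DIRS})

add_executable(point_cloud_transform src/point_cloud_transform.cpp)
target_link_libraries(point_cloud_transform ${catkin_LIBRARIES} ${PCL_LIBRARIES})
```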
Something very strange is going on with the point cloud from the depth camera. The camera view looks good, but the cloud lies in the wrong plane in the world. The depth camera frame itself is in the correct place; I have verified that. The point cloud is in the "depth_camera" frame, which I am finding by using TF and the state of the body and the state of the camera. This, however, seems to place the points in a rotated position and/or along the wrong axis.
@adnanmunawar said: I believe that the point cloud computation is in OpenGL's fixed camera frame convention: depth along -Z, right along +X, and up along +Y. We can orient this to match the camera frame specified by AMBF, which can have user-specified x, y, and z axes depending upon the setting of up, look at, and location.