Hello, I am working on a simple wrapper to use libfreenect2 to get a PCL point cloud without ROS. I used part of your code for registration and I did the following. I am not sure about the registration API and I am not getting a nice cloud. Is the call order correct? Thanks
Hi, first you have to register the depth image, and then you can upscale it. You also need to provide a translation and rotation.
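In rough outline, the sequence looks like this (a sketch against the old DepthRegistration interface; the constructor and init arguments are shown in full in the code further down):

```cpp
// Sketch of the call order: construct, init with intrinsics + extrinsics,
// register at depth resolution, then upscale to the color resolution.
DepthRegistration *reg = new DepthRegistrationCPU(sizeColor, sizeDepth, sizeDepth, 0.5f, 20.0f);
reg->init(cameraMatrixColor, cameraMatrixDepth, rotation, translation, mapX, mapY);
reg->registerDepth(depth, registered);           // still at depth resolution
reg->depthToRGBResolution(registered, scaled);   // upscaled to color resolution
```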
Thanks, so the order should be: new... init... registerDepth... depthToRGBResolution... right? What do you mean by "give a translation and rotation"? Thanks!
OK, I understand now what you mean. I used the parameters from the calibration you provided, but I still get a very bad result.
This is the new code:

```cpp
typename pcl::PointCloud<PointT>::Ptr
GetCloud() {
  frames_ = *GetRawFrames();
  rgb_ = frames_[libfreenect2::Frame::Color];
  depth_ = frames_[libfreenect2::Frame::Depth];
  tmp_depth_ = cv::Mat(depth_->height, depth_->width, CV_32FC1, depth_->data);
  tmp_rgb_ = cv::Mat(rgb_->height, rgb_->width, CV_8UC3, rgb_->data);
  typename pcl::PointCloud<PointT>::Ptr cloud(new pcl::PointCloud<PointT>());
  cloud->height = tmp_rgb_.rows;
  cloud->width = tmp_rgb_.cols;
  cloud->is_dense = false;
  cloud->points.resize(cloud->height * cloud->width);

  if(depthReg_ == 0) {
    cameraMatrixColor_ = (cv::Mat_<double>(3,3) <<
      1.0607072507083330e+03, 0., 9.5635447181548398e+02,
      0., 1.0586083263054650e+03, 5.1897844298824486e+02,
      0., 0., 1.);
    cameraMatrixDepth_ = (cv::Mat_<double>(3,3) <<
      3.6753450559472907e+02, 0., 2.4449142359870606e+02,
      0., 3.6659938826108794e+02, 2.0783007950245891e+02,
      0., 0., 1.);
    createLookup(tmp_rgb_.cols, tmp_rgb_.rows);
    rotation_ = (cv::Mat_<double>(3,3) <<
      9.9983695759005575e-01, -1.6205694274052811e-02, -7.9645282444769407e-03,
      1.6266694212988934e-02,  9.9983838631845712e-01,  7.6547961099503467e-03,
      7.8391897822575607e-03, -7.7831045990485416e-03,  9.9993898333166209e-01);
    translation_ = (cv::Mat_<double>(1,3) <<
      -5.1927840124258939e-02, -4.5307585220976776e-04, 7.0571985343338605e-05);
    depthReg_ = new DepthRegistrationCPU(cv::Size(tmp_rgb_.cols, tmp_rgb_.rows),
                                         cv::Size(tmp_depth_.cols, tmp_depth_.rows),
                                         cv::Size(tmp_depth_.cols, tmp_depth_.rows),
                                         0.5f, 20.0f);
    depthReg_->init(cameraMatrixColor_, cameraMatrixDepth_,
                    rotation_, translation_,
                    cv::Mat::zeros(tmp_depth_.rows, tmp_depth_.cols, CV_32F),
                    cv::Mat::zeros(tmp_depth_.rows, tmp_depth_.cols, CV_32F));
  }
  depthReg_->registerDepth(tmp_depth_, registered_);
  depthReg_->depthToRGBResolution(registered_, scaled_);
  createCloud(scaled_, tmp_rgb_, cloud);
  return cloud;
}
```
Any idea?
I don't know. It looks like errors from interpolated depth points.
Ok, thanks. Still having problems. Just to be sure I understood correctly, these are the steps to get a registered point cloud:

1. Create a lookup table.
2. Read the intrinsic and extrinsic parameters.
3. Create a new DepthRegistration object (CPU or OpenCL).
4. Initialize the object with the intrinsic and extrinsic parameters.
5. Register the depth image (which is still low resolution) using registerDepth().
6. Upscale the depth image to the RGB resolution using depthToRGBResolution().
7. Create the cloud using the upscaled depth and the RGB mat.

Am I correct? That way I can focus on the individual parts. I have seen that even passing the camera's depth and RGB directly to createCloud gives bad results, but that function only uses the lookup table created with createLookup(). Just visualizing the depth cloud (setting all points to the same color) shows problems. I presume there is some fault in my camera parameters. The parameters are the following:
color: fx 1081.37, fy 1081.37, cx 959.5, cy 539.5
ir: fx 365.478, fy 365.478, cx 258.937, cy 207.594
rotation:
 9.9983695759005575e-01, -1.6205694274052811e-02, -7.9645282444769407e-03,
 1.6266694212988934e-02,  9.9983838631845712e-01,  7.6547961099503467e-03,
 7.8391897822575607e-03, -7.7831045990485416e-03,  9.9993898333166209e-01
translation:
-5.1927840124258939e-02, -4.5307585220976776e-04, 7.0571985343338605e-05
Could you just confirm that you are using the same, or almost the same, values?
Thanks
Hi, I also followed your conversation on the PCL mailing list. The OpenCL implementation is not limited to NVIDIA; it also works with AMD and, on Macs, with Intel GPUs.
In theory you just have to call cv::initUndistortRectifyMap. The resulting maps give, for each pixel of a color image, the corresponding coordinates in the depth image, and they also take the distortion coefficients into account. But since the resolutions are different I did some extra steps. In the https://github.com/code-iai/iai_kinect2/tree/libfreenect2-upstream branch I have modified the depth registration to be easier to use. It just needs to be initialized with the camera parameters and image resolutions of both sensors, the distortion coefficients of the depth sensor, and the translation and rotation between the depth and color sensors. Then you simply give the raw depth image as input.
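As a rough illustration of how such maps are built and applied in plain OpenCV (the matrix names here are placeholders, and the actual registration in the branch does more than this single call):

```cpp
#include <opencv2/imgproc/imgproc.hpp>

// Build per-pixel maps from the registered (output) image to the source image.
cv::Mat map1, map2;
cv::initUndistortRectifyMap(
    cameraMatrixDepth,       // intrinsics of the depth sensor
    distortionDepth,         // its distortion coefficients {k1, k2, p1, p2, k3}
    cv::Mat(),               // no extra rectification rotation
    cameraMatrixRegistered,  // intrinsics of the camera to register to
    sizeRegistered,          // output resolution, e.g. cv::Size(1920, 1080)
    CV_32FC1, map1, map2);

// Resample; nearest neighbor avoids interpolating depth across object edges.
cv::remap(depth, registered, map1, map2, cv::INTER_NEAREST);
```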
So if you want to create a lower-resolution depth image, you can simply halve the intrinsics and the registered image size, as in the sketch below.
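A minimal sketch of that, assuming the 3x3 double camera matrix and size names from above:

```cpp
// Scale the intrinsics and the output size by 0.5 for a half-resolution result.
cv::Mat halfCameraMatrix = cameraMatrixRegistered.clone();
halfCameraMatrix.at<double>(0, 0) *= 0.5;  // fx
halfCameraMatrix.at<double>(1, 1) *= 0.5;  // fy
halfCameraMatrix.at<double>(0, 2) *= 0.5;  // cx
halfCameraMatrix.at<double>(1, 2) *= 0.5;  // cy
cv::Size halfSize(sizeRegistered.width / 2, sizeRegistered.height / 2);
```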
Right now the rotation and translation have to be computed for each sensor. That can be done with the calibration tool I provided, or you can simply use the values that are in the repository, but those might not be perfect for other sensors.
Perfect, thank you for the exhaustive answer. Just to be sure, could you quickly check whether my parameters are in a plausible range? I will have a look at the new repo, but the previous one should also work; I just need to figure out where the problem is.
Those look fine. In https://github.com/code-iai/iai_kinect2/tree/master/kinect2_bridge/data are the parameters I got from calibrating my sensors.
Ok, thanks. The new interface should use the following sequence, if I am correct: 1) init(...) 2) registerDepth(...)
Nice improvement :)
yep.
Thanks. Small suggestion: change the cameraMatrixRegistered name. It is not really reminiscent of the camera parameter matrix.
It should mean something like "camera matrix of the sensor to which the depth image should be registered". What would you suggest instead?
If I understood correctly, it's the camera matrix that you previously called cameraMatrixColor, right? I would keep that name, or something like cameraParameterMatrixColor.
Yes, but you can register it to any other camera, so it does not have to be a color camera... that's why I changed it.
Ok, that seems logical.
Maybe this could be useful for other people using this code with the Kinect v2 (correct me if I am wrong):
- cameraMatrixRegistered: the color camera sensor parameter matrix; sizeRegistered: 3x3
- cameraMatrixDepth: the depth camera sensor parameter matrix; sizeDepth: 3x3
- distortionDepth: distortion matrix (1x5)
- rotation: the rotation between the sensors (3x3)
- translation: the translation between the sensors (1x3)
- zNear: the near cut limit
- zFar: the far cut limit
A few corrections:
- sizeRegistered: cv::Size(1920, 1080)
- sizeDepth: cv::Size(512, 424)
- distortionDepth: cv::Mat 1x5 {k1, k2, p1, p2, k3} (used for undistorting the depth image; there are default values for each sensor, readable through libfreenect)
- translation: 3x1
- camera matrix:
```
[ fx  0 cx ]
[  0 fy cy ]
[  0  0  1 ]
```
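Putting those shapes together, an init call would look roughly like this (a sketch; tx, ty, tz and the k/p coefficients are placeholders):

```cpp
// Matrix shapes as listed above; the values are placeholders.
cv::Mat rotation        = (cv::Mat_<double>(3, 3) << 1,0,0, 0,1,0, 0,0,1);
cv::Mat translation     = (cv::Mat_<double>(3, 1) << tx, ty, tz);  // 3x1, not 1x3
cv::Mat distortionDepth = (cv::Mat_<double>(1, 5) << k1, k2, p1, p2, k3);

depthReg->init(cameraMatrixRegistered, cv::Size(1920, 1080),
               cameraMatrixDepth,      cv::Size(512, 424),
               distortionDepth, rotation, translation);
```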
Ok thanks
Today I tested your new version, but there is a problem.
```cpp
frames_ = *GetRawFrames();
rgb_ = frames_[libfreenect2::Frame::Color];
depth_ = frames_[libfreenect2::Frame::Depth];
tmp_depth_ = cv::Mat(depth_->height, depth_->width, CV_32FC1, depth_->data);
tmp_rgb_ = cv::Mat(rgb_->height, rgb_->width, CV_8UC3, rgb_->data);

if(depthReg_ == 0) {
  cloud_->height = tmp_rgb_.rows;
  cloud_->width = tmp_rgb_.cols;
  cloud_->is_dense = false;
  cloud_->points.resize(cloud_->height * cloud_->width);
  cameraMatrixColor_ = (cv::Mat_<double>(3,3) <<
    color_camera_params_.fx, 0., color_camera_params_.cx,
    0., color_camera_params_.fy, color_camera_params_.cy,
    0., 0., 1.);
  cameraMatrixDepth_ = (cv::Mat_<double>(3,3) <<
    ir_camera_params_.fx, 0., ir_camera_params_.cx,
    0., ir_camera_params_.fy, ir_camera_params_.cy,
    0., 0., 1.);
  rotation_ = (cv::Mat_<double>(3,3) <<
    9.9983695759005575e-01, -1.6205694274052811e-02, -7.9645282444769407e-03,
    1.6266694212988934e-02,  9.9983838631845712e-01,  7.6547961099503467e-03,
    7.8391897822575607e-03, -7.7831045990485416e-03,  9.9993898333166209e-01);
  translation_ = (cv::Mat_<double>(3,1) <<
    -5.1927840124258939e-02, -4.5307585220976776e-04, 7.0571985343338605e-05);
  distortion_ = (cv::Mat_<double>(1,5) <<
    1.0290630865336201e-01, -2.8973989337475542e-01, 1.0698071918753883e-03,
    1.1438966542906426e-04, 1.1137787974461313e-01);
  depthReg_ = new DepthRegistrationCPU();
  size_registered_ = cv::Size(1920, 1080);
  size_depth_ = cv::Size(512, 424);
  depthReg_->init(cameraMatrixColor_, size_registered_,
                  cameraMatrixDepth_, size_depth_,
                  distortion_, rotation_, translation_);
}
depthReg_->registerDepth(tmp_depth_, registered_);
```
with the following parameters taken from the camera:
color: fx 1081.37, fy 1081.37, cx 959.5, cy 539.5
ir: fx 365.478, fy 365.478, cx 258.937, cy 207.594
registered_ is just a sequence of zero values when printed out as uint16_t.
First of all, the images are flipped, so you need to flip them:

```cpp
cv::flip(tmp_depth_, tmp_depth_, 1);
cv::flip(tmp_rgb_, tmp_rgb_, 1);
```

The depth registration needs a depth image with uint16_t values (libfreenect2 delivers depth as 32-bit float millimeters), so try converting it first:

```cpp
tmp_depth_.convertTo(tmp_depth_, CV_16U);
```
Added that. Still having problems, but I will debug a little and see. Thanks!
But the depth image is not all zeros, right?
No, I was wrong; the interpolation is not always 0. I will continue investigating and let you know as soon as I find something strange.
Fixing some problems, I arrived at this result. Any ideas? The depth seems ok but everything is curved.
How do you compute the 3D points? I do it in the viewer like this:
```cpp
void createCloud(const cv::Mat &depth, const cv::Mat &color, pcl::PointCloud<pcl::PointXYZRGBA>::Ptr &cloud) const
{
  const float badPoint = std::numeric_limits<float>::quiet_NaN();

  #pragma omp parallel for
  for(int r = 0; r < depth.rows; ++r)
  {
    pcl::PointXYZRGBA *itP = &cloud->points[r * depth.cols];
    const uint16_t *itD = depth.ptr<uint16_t>(r);
    const cv::Vec3b *itC = color.ptr<cv::Vec3b>(r);
    const float y = lookupY.at<float>(0, r);
    const float *itX = lookupX.ptr<float>();

    for(size_t c = 0; c < (size_t)depth.cols; ++c, ++itP, ++itD, ++itC, ++itX)
    {
      register const float depthValue = *itD / 1000.0f;
      // Check for invalid measurements
      if(isnan(depthValue) || depthValue <= 0.001)
      {
        // not valid
        itP->x = itP->y = itP->z = badPoint;
        itP->rgba = 0;
        continue;
      }
      itP->z = depthValue;
      itP->x = *itX * depthValue;
      itP->y = y * depthValue;
      itP->b = itC->val[0];
      itP->g = itC->val[1];
      itP->r = itC->val[2];
      itP->a = 0;
    }
  }
}
```
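For context, the lookupX/lookupY tables used above just precompute the per-column and per-row ray directions. A minimal sketch of what createLookup could look like, assuming the color intrinsics are available as color_camera_params_:

```cpp
// Precompute (c - cx) / fx and (r - cy) / fy once, so the per-point work in
// createCloud is a single multiply by the depth value.
void createLookup(size_t width, size_t height)
{
  const float inv_fx = 1.0f / color_camera_params_.fx;
  const float inv_fy = 1.0f / color_camera_params_.fy;
  const float cx = color_camera_params_.cx;
  const float cy = color_camera_params_.cy;

  lookupY = cv::Mat(1, height, CV_32F);
  float *it = lookupY.ptr<float>();
  for(size_t r = 0; r < height; ++r, ++it)
  {
    *it = (r - cy) * inv_fy;
  }

  lookupX = cv::Mat(1, width, CV_32F);
  it = lookupX.ptr<float>();
  for(size_t c = 0; c < width; ++c, ++it)
  {
    *it = (c - cx) * inv_fx;
  }
}
```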
I fixed something and it looks better, but it is not registered. I am using code similar to yours. I tried yours but it is not working... My code looks like this:
```cpp
void createCloud(const cv::Mat &depth, const cv::Mat &color, typename pcl::PointCloud<PointT>::Ptr &cloud) const
{
  const float badPoint = std::numeric_limits<float>::quiet_NaN();

  #pragma omp parallel for
  for(int r = 0; r < depth.rows; ++r)
  {
    PointT *itP = &cloud->points[r * depth.cols];
    const uint16_t *itD = depth.ptr<uint16_t>(r);
    const cv::Vec3b *itC = color.ptr<cv::Vec3b>(r);

    for(size_t c = 0; c < (size_t)depth.cols; ++c, ++itP, ++itD, ++itC)
    {
      register const float depthValue = *itD / 1000.0f;
      // Check for invalid measurements
      const double invZ = 1 / depthValue;
      const int xP = (color_camera_params_.fx * c) * depthValue + color_camera_params_.cx;
      const int yP = (color_camera_params_.fy * r) * depthValue + color_camera_params_.cy;
      if(isnan(depthValue) || depthValue <= 0.001)
      {
        // not valid
        itP->x = itP->y = itP->z = badPoint;
        itP->rgba = 0;
        continue;
      }
      itP->z = depthValue * 1000;
      itP->x = xP * 0.001;
      itP->y = yP * 0.001;
      itP->b = itC->val[0];
      itP->g = itC->val[1];
      itP->r = itC->val[2];
    }
  }
}
```
And I get the following cloud:
Fixed something else and it looks better. With the previous code I was projecting everything a second time, which gave strange results. This code works almost fine, but the cloud is still not registered, and it looks stretched (in depth) and not straight (the wall on the right):
```cpp
void createCloud(const cv::Mat &depth, const cv::Mat &color, typename pcl::PointCloud<PointT>::Ptr &cloud) const
{
  const float badPoint = std::numeric_limits<float>::quiet_NaN();

  #pragma omp parallel for
  for(int r = 0; r < depth.rows; ++r)
  {
    PointT *itP = &cloud->points[r * depth.cols];
    const uint16_t *itD = depth.ptr<uint16_t>(r);
    const cv::Vec3b *itC = color.ptr<cv::Vec3b>(r);

    for(size_t c = 0; c < (size_t)depth.cols; ++c, ++itP, ++itD, ++itC)
    {
      register const float depthValue = *itD / 1000.0f;
      // Check for invalid measurements
      const double invZ = 1 / depthValue;
      const int xP = (color_camera_params_.fx * c) + color_camera_params_.cx;
      const int yP = (color_camera_params_.fy * r) + color_camera_params_.cy;
      if(isnan(depthValue) || depthValue <= 0.001)
      {
        // not valid
        itP->x = itP->y = itP->z = badPoint;
        itP->rgba = 0;
        continue;
      }
      itP->z = depthValue * 1000;
      itP->x = xP * 0.001;
      itP->y = yP * 0.001;
      itP->b = itC->val[0];
      itP->g = itC->val[1];
      itP->r = itC->val[2];
    }
  }
}
```
Try this:

```cpp
register const float depthValue = *itD / 1000.0f;
// Check for invalid measurements
if(isnan(depthValue) || depthValue <= 0.001)
{
  // not valid
  itP->x = itP->y = itP->z = badPoint;
  itP->rgba = 0;
  continue;
}
itP->z = depthValue;
itP->x = ((c - color_camera_params_.cx) / color_camera_params_.fx) * depthValue;
itP->y = ((r - color_camera_params_.cy) / color_camera_params_.fy) * depthValue;
itP->b = itC->val[0];
itP->g = itC->val[1];
itP->r = itC->val[2];
```
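For reference, this is the standard inverse pinhole projection: a pixel (c, r) with depth z in meters maps to x = (c - cx) * z / fx, y = (r - cy) * z / fy, z = z. The earlier versions were multiplying by fx and fy, i.e. applying the forward projection instead of inverting it.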
Ok, the cloud looks perfect now, but it is not aligned.
The alignment is a calibration problem, so if you calibrate your sensor you will get better results.
Ok I will try to calibrate it. You have been really helpful, thank you again.
You're welcome. Could this be closed now?
I would suggest waiting 1-2 days to be sure everything is fixed, if that is ok with you.
You can close, thanks