psmoveservice / PSMoveService

A background service that communicates with the psmove and stores pose and button data.
Apache License 2.0

Each camera (x lens) needs a default camera matrix and (null) distortion coefficients #50

Open cboulay opened 8 years ago

cboulay commented 8 years ago

After the contour is found, but before we subtract off the centre, we should try to use cv::undistortPoints.

For this to work, we need a camera matrix and distortion coefficients. We can guess at the camera matrix if we know the camera's resolution and FOV. The distortion coefficients should start off with whatever values yield no distortion.
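For example, a minimal sketch of that default guess (the 60-degree FOV, function name, and constant name here are illustrative, not values from the codebase):

```cpp
#include <opencv2/core.hpp>
#include <cmath>

// Sketch: build a default pinhole camera matrix from resolution and an
// assumed horizontal FOV, plus "null" (all-zero) distortion coefficients.
cv::Matx33f makeDefaultCameraMatrix(int width, int height, float hfov_degrees)
{
    // Pinhole model: focal length in pixels is f = (w/2) / tan(hfov/2).
    const float f_px =
        (width * 0.5f) / std::tan(hfov_degrees * 0.5f * static_cast<float>(CV_PI) / 180.f);
    return cv::Matx33f(
        f_px, 0.f,  width * 0.5f,   // fx, skew, cx (principal point at center)
        0.f,  f_px, height * 0.5f,  // fy, cy
        0.f,  0.f,  1.f);
}

// Distortion coefficients that yield no distortion: all zeros (k1, k2, p1, p2, k3).
const cv::Mat kNullDistortion = cv::Mat::zeros(5, 1, CV_32F);
```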

Then, for each camera, we can use a checkerboard calibration app to get its parameters after which those parameters will be loaded instead.

HipsterSloth commented 8 years ago

I was thinking we could port over the tracker_camera_calibration tool from psmoveapi, which computes both the camera intrinsics (focal length and principal point) and the distortion coefficients. There is already a ServerTrackerView::setCameraIntrinsics() method for saving the camera intrinsic values into the camera's config, but no protocol message for sending that info from the config tool yet.

HipsterSloth commented 8 years ago

@cboulay I started work on the distortion calibration tool this weekend, but only got about 2-3 hours of work into it (I hosted a pig roast at my house this weekend, and its setup and teardown used up most of my time). What I have done so far is checked into the generic_camera branch. I'll try to get this done in the next few days.

HipsterSloth commented 8 years ago

@cboulay I have the whole distortion calibration tool + config state in place now. The last piece I need to do is actually apply the undistortion in the service. I saw this comment:

    // TODO: cv::undistortPoints  http://docs.opencv.org/3.1.0/da/d54/group__imgproc__transform.html#ga55c716492470bfe86b0ee9bf3a1f0f7e
    // Then replace F_PX with -1.

So does this mean I should call cv::undistortPoints on the contour I get back from OpenCVBufferState::computeBiggestConvexContour? And if so, is it in fact OK to pass -1 into eigen_alignment_fit_focal_cone_to_sphere for the focal length, since undistortPoints automatically takes the camera intrinsic matrix into account?

I assume it's better to do it this way, rather than apply the distortion map to the video feed, because this is cheaper to compute?

cboulay commented 8 years ago

this is cheaper to compute

That was my thinking too. What I don't know is if undistortion prior to getting the contour would affect the contour itself. Let's say we have our distorted pixels d, and our undistorted pixels s (for source). If the undistortion uses a map such that s = Ad, where A is a matrix with non-zero off-diagonal elements, then it's possible that the number of pixels in s that are above threshold is different to the number of pixels in d that are above threshold. In other words, doing d->contour->undistort may not yield the same x-y locations as doing d->undistort->contour.

This is worth testing. If the resulting pixel coordinates are the same no matter the order, then using undistortPoints should be faster.
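A rough sketch of that test, assuming thresholded input frames (the helper below stands in for the real contour code, and the function names are illustrative):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Stand-in for the tracker's contour search (e.g. computeBiggestConvexContour).
static std::vector<cv::Point> biggestContour(const cv::Mat& binary)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary.clone(), contours,
                     cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
    int best = -1;
    double bestArea = 0.0;
    for (int i = 0; i < static_cast<int>(contours.size()); ++i) {
        const double area = cv::contourArea(contours[i]);
        if (area > bestArea) { bestArea = area; best = i; }
    }
    return best >= 0 ? contours[best] : std::vector<cv::Point>();
}

// Order A: d -> contour -> undistort (only the contour points are remapped).
std::vector<cv::Point2f> contourThenUndistort(const cv::Mat& d,
                                              const cv::Mat& K, const cv::Mat& D)
{
    const std::vector<cv::Point> c = biggestContour(d);
    std::vector<cv::Point2f> cf(c.begin(), c.end()), out;
    if (cf.empty()) return out;
    // Passing K again as P keeps the output in pixel coordinates.
    cv::undistortPoints(cf, out, K, D, cv::noArray(), K);
    return out;
}

// Order B: d -> undistort -> contour (every pixel is remapped first).
std::vector<cv::Point2f> undistortThenContour(const cv::Mat& d,
                                              const cv::Mat& K, const cv::Mat& D)
{
    cv::Mat u;
    cv::undistort(d, u, K, D);
    const std::vector<cv::Point> c = biggestContour(u);
    return std::vector<cv::Point2f>(c.begin(), c.end());
}
```

Comparing the two outputs on real frames would show whether the cheap order is good enough.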

HipsterSloth commented 8 years ago

I just tried the cv::undistortPoints approach (i.e. d -> contour -> undistort). It appears to put the points into some normalized camera space (the contours generated were really tiny and near the origin). Looking at the docs for cv::undistortPoints, it looks like it first applies the inverse of the intrinsic matrix, so that the returned coordinates are "normalized so that they do not depend on the camera matrix". All of the other tracker code assumes that we're dealing in a centered pixel coordinate space, i.e. [-320, 320]x[-240, 240], so I think I need to apply the intrinsic matrix to the contour I get back from cv::undistortPoints.
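If the normalized output is kept, converting back to the tracker's centered space is just a scale by the focal lengths, since both the normalized coordinates and the centered pixel space already have the principal point removed. A minimal sketch (names illustrative):

```cpp
#include <opencv2/core.hpp>
#include <vector>

// With no P matrix, cv::undistortPoints returns x = (u - cx)/fx, y = (v - cy)/fy.
// Scaling by fx/fy maps that back into the centered pixel space the tracker
// expects, e.g. [-320, 320] x [-240, 240] for a 640x480 image.
void normalizedToCenteredPixels(std::vector<cv::Point2f>& points, float fx, float fy)
{
    for (cv::Point2f& p : points) {
        p.x *= fx;
        p.y *= fy;
    }
}
```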

I just checked in a version of the undistortion that uses the cv::remap approach since that requires the least amount of adjustment to the ServerTrackerView. Next week I'll try and compare the two approaches.
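For reference, a sketch of that cv::remap approach (struct and function names are illustrative): the maps are computed once per camera and then applied to every frame.

```cpp
#include <opencv2/opencv.hpp>

// Precompute the undistortion maps once per camera, then remap each frame
// before the contour search. K and D come from the tracker config.
struct UndistortMaps { cv::Mat map1, map2; };

UndistortMaps buildUndistortMaps(const cv::Mat& K, const cv::Mat& D, cv::Size size)
{
    UndistortMaps maps;
    cv::initUndistortRectifyMap(K, D, cv::noArray(),
                                K,  // keep the same camera matrix for the output
                                size, CV_32FC1, maps.map1, maps.map2);
    return maps;
}

cv::Mat undistortFrame(const cv::Mat& raw_frame, const UndistortMaps& maps)
{
    cv::Mat undistorted;
    cv::remap(raw_frame, undistorted, maps.map1, maps.map2, cv::INTER_LINEAR);
    return undistorted;
}
```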

I have to switch gears back to DualShock4 tracking for the next few days.

cboulay commented 8 years ago

@gb2111 ,

I actually think this is a prerequisite to improving tracker performance. Because you have 4 cameras, can you run them through a calibration tool and see what their intrinsic parameters are? @HipsterSloth and I can compare that to what we have and then come up with an average or 'default' set of parameters. Please do it for both the red-dot setting and the blue-dot setting on your cameras.

For calibration tools, you can try the one that comes with psmoveapi, or better yet you can port that to PSMoveService's config tool.

We can load the camera parameters from a config file, defaulting to the above mean values, and then we can properly undistort the image before finding the contour.
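One possible shape for that load-with-fallback, sketched with cv::FileStorage (the file path and node names are illustrative, not the actual PSMoveService config format; makeDefaultCameraMatrix is the earlier sketch):

```cpp
#include <opencv2/core.hpp>
#include <string>

// Load per-camera parameters, falling back to the default guess when no
// calibration file exists yet.
void loadCameraConfig(const std::string& path, cv::Mat& K, cv::Mat& D)
{
    cv::FileStorage fs(path, cv::FileStorage::READ);
    if (fs.isOpened()) {
        fs["camera_matrix"] >> K;
        fs["distortion_coefficients"] >> D;
    } else {
        K = cv::Mat(makeDefaultCameraMatrix(640, 480, 60.f));
        D = cv::Mat::zeros(5, 1, CV_32F);
    }
}
```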

gb2111 commented 8 years ago

Not sure if that's what you're asking... for both the blue dot and the red dot, as output from m_device->getCameraIntrinsics(F_PX, F_PY, PrincipalX, PrincipalY), I get 554.26, 554.26, 320.00, 240.00.

If that was not what was needed, please advise more specifically.

cboulay commented 8 years ago

I'd like you to run a camera calibration tool. This needs to be done separately for each camera, and separately for each of the red-dot and blue-dot settings. The values you get back should be different from the default values you listed above.
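For reference, the numbers you listed look like the generic pinhole guess rather than measured values: for a 640x480 image and an assumed 60 degree horizontal FOV, f = (640/2) / tan(60°/2) ≈ 554.26 px, and (320.00, 240.00) is just the image center. A real calibration should move all four numbers.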

You can find a tutorial on how to write and run a camera calibration app here. Note that the tutorial was written for OpenCV2, so there might be some differences in OpenCV3. Ultimately we'd like such an app in PSMoveService's config tool, so if you're willing to write it then that would be great.

But, if not, you can find a pre-compiled version of psmoveapi's camera calibration app here. The source code is found here. The code is very similar to OpenCV's camera calibration tutorial code, just adapted a little for the PSEye camera. After you run the program and go through the calibration steps, it should save a result in the ~/.psmoveapi directory. While the filename should have the camera serial in it, it won't have _red or _blue, so you'll have to rename the file before switching the camera setting and running the calibration again.
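The core of such an app, sketched against the OpenCV 3 API (the board size, square size, minimum view count, and error threshold here are illustrative):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Minimal checkerboard calibration in the spirit of the OpenCV tutorial.
// gray_frames: grayscale captures of the board in a variety of poses.
bool calibrateFromFrames(const std::vector<cv::Mat>& gray_frames,
                         cv::Mat& K, cv::Mat& D)
{
    const cv::Size board(9, 6);    // inner corners of the checkerboard
    const float square_mm = 25.f;  // physical square size (any unit works)

    std::vector<std::vector<cv::Point3f>> object_points;
    std::vector<std::vector<cv::Point2f>> image_points;

    // The same 3D corner layout is reused for every view of the board.
    std::vector<cv::Point3f> corners3d;
    for (int y = 0; y < board.height; ++y)
        for (int x = 0; x < board.width; ++x)
            corners3d.emplace_back(x * square_mm, y * square_mm, 0.f);

    for (const cv::Mat& gray : gray_frames) {
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(gray, board, corners)) {
            cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                              30, 0.1));
            image_points.push_back(corners);
            object_points.push_back(corners3d);
        }
    }
    if (image_points.size() < 10)  // want a variety of board poses
        return false;

    std::vector<cv::Mat> rvecs, tvecs;
    const double rms = cv::calibrateCamera(object_points, image_points,
                                           gray_frames[0].size(), K, D,
                                           rvecs, tvecs);
    return rms < 1.0;  // reprojection error in pixels; threshold is a guess
}
```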

HipsterSloth commented 8 years ago

But, if not, you can find a pre-compiled version of psmoveapi's camera calibration app here. The source code is found here. The code is very similar to OpenCV's camera calibration tutorial code, just adapted a little for the PSEye camera.

This calibration tool already exists in the generic camera branch (it uses the OpenCV methods outlined). I hadn't ported it to master yet because I couldn't get the calibration to return reliable results yet. Before anyone writes any new camera calibration code, they should compare it to this first:

https://github.com/cboulay/PSMoveService/blob/generic_camera/src/psmoveconfigtool/AppStage_DistortionCalibration.cpp

gb2111 commented 8 years ago

psmoveapi.zip

In the folder for each camera there are a few subfolders with different attempts.

@cboulay , I attach results from each of the cameras. They are different, but once calibrated we can see the difference from the raw image. It looks very promising for improving accuracy if we get this implemented. I have results only for blue; for red I got a popup to select the camera and it was not getting anywhere. But since we use blue, I hope we can get that done first, and in the meantime I will try to do something with red. It's the same camera anyway, so I am guessing it would be helpful anyway ;)

Let's get started with these data, as it will definitely need some tests. Thanks.

gb2111 commented 8 years ago

I tried a few times with different sets of values, but I got values that I could not interpret: between 0 and 1, which I could not relate to the original values. I wonder if you have looked at these data and whether they look valid?

gb2111 commented 7 years ago

@HipsterSloth , @cboulay , I have seen that you already wrote about your attempt to implement undistortion here: https://github.com/cboulay/PSMoveService/issues/50#issuecomment-234383275 I also started working on it, but since I have seen on Google Groups that Chad is working on the same thing, I had better leave it. To address the problem you described, you simply need to pass the camera matrix back as the P parameter, which is null by default. http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#undistortpoints

I have code that does this and reads its input from the XML created by your calibration program. If you want this even as a starting point, I can commit it.

The best option would be to change the type in the service from cv::Point to cv::Point2f; however, that changes the definitions of many functions and requires testing that I can't do for devices like the DualShock.

My tests with undistortion did not solve the problem of the controller shifting when you lose or regain a camera. It was even bigger. Also, the calibrations were giving me different results, so the ones I used were probably far from good. This definitely needs more time or experience ;)

Greg

HipsterSloth commented 7 years ago

Hey Greg,

Yeah, Chad is working on taking the undistortion work I started across the finish line. We just merged a ton of code from different branches back into master. There were enough changes going on with how the tracking code works that it was difficult to work on Region-of-Interest optimization and distortion tuning without pulling all of the in-progress work together. So the master branch now has a number of in-progress pieces of code.

Point being there is a ton of churn in the master branch right now. You might want to wait a few days for the dust to settle before syncing. I've done a quick bit of smoke testing on this but I'm almost certain there will be something broken.

Thanks for doing testing on this, by the way. I'm hoping to make a custom build tonight so you can try out the Kalman filter. I'll post a link here if I can get that ready. If not tonight, then certainly by the end of the weekend, after I'm done with Thanksgiving.

cboulay commented 7 years ago

@gb2111 , I'm still interested in the work you've done. There are probably too many changes between what's now in master and what you were working on for a commit & pull request to be useful. Instead, you can make a commit to your personal branch and then just point me to the relevant commit(s); I can read and interpret the changes you made and port from there as needed.

gb2111 commented 7 years ago

@cboulay , sorry, but it seems that I lost this work, even though I took a copy prior to refreshing from the branch. I really have no idea how I could have lost it... Anyway, once I have read the XML files produced by your old app into cameraMatrix and distCoeffs, I do the following:

  1. Copy biggest_contour into a std::vector<cv::Point2f> so the points are float; the output will go into biggest_contour_undistort of the same type.
  2. Call undistortPoints and pass cameraMatrix also as the P parameter, so it is used twice: undistortPoints(biggest_contour, biggest_contour_undistort, cameraMatrix, distCoeffs, noArray(), cameraMatrix). (See the sketch below.)
  3. You also need to change the convex hull code to float.
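A self-contained sketch of those three steps (the wrapper function is illustrative):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Undistort the contour points instead of the whole image. Passing
// cameraMatrix again as the P parameter keeps the result in pixel
// coordinates rather than normalized coordinates.
std::vector<cv::Point2f> undistortContour(const std::vector<cv::Point>& biggest_contour,
                                          const cv::Mat& cameraMatrix,
                                          const cv::Mat& distCoeffs)
{
    // 1. Convert the integer contour to float points.
    std::vector<cv::Point2f> contour_f(biggest_contour.begin(), biggest_contour.end());

    // 2. Undistort; cameraMatrix appears twice, as the intrinsics and as P.
    std::vector<cv::Point2f> biggest_contour_undistort;
    cv::undistortPoints(contour_f, biggest_contour_undistort,
                        cameraMatrix, distCoeffs,
                        cv::noArray(), cameraMatrix);

    // 3. Downstream code (e.g. cv::convexHull) must then operate on Point2f.
    return biggest_contour_undistort;
}
```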

In case of questions, please let me know. Greg