adynathos / AugmentedUnreality

Augmented reality for Unreal Engine 4

Coordinate system ... #3

Open antithing opened 7 years ago

antithing commented 7 years ago

Hi, thank you for making this code available. I am putting together a similar, but much simpler, plugin that spawns cubes in the viewport from static ArUco marker positions. My issue is that I cannot get the coordinate spaces to line up properly. I have tried swapping and negating axes, and just can't get it right. If you have a moment, could you please run me through how you solved this problem? Do you use the rVecs and tVecs returned from marker detection? Or do you start with the matrix?

Is this:

const FTransform FAURArucoTracker::CameraAdditionalRotation = FTransform(FQuat(FVector(0, 1, 0), -M_PI / 2), FVector(0, 0, 0), FVector(1, 1, 1));

the only transformation that I should need?

Thank you!

alicranck commented 6 years ago

Hi,

I would be happy to hear if and how you solved this issue. I am also writing a simple ArUco-based detection module that integrates with UE, and I am not sure at what point I should rotate the axes.

Thanks!

adynathos commented 6 years ago

Hello, the coordinate systems of Unreal and OpenCV are indeed different. Here are the steps performed in this plugin:

Getting the pose from ArUco link

int success = cv::aruco::estimatePoseBoard(
    pose_info->foundMarkerCorners, pose_info->foundMarkerIds, board,
    tr->cameraIntrinsicMat, tr->cameraDistortion,
    rotation_axis_angle, translation
);
pose_info->clearFound();

if (success > 0)
{
    pose_info->setTransform(rotation_axis_angle, translation);
    return true;
}
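
For anyone reading this without the plugin source at hand, here is a minimal standalone sketch of the ArUco calls the snippet above wraps. The names camera_matrix, dist_coeffs, board and EstimateBoardPose are illustrative, not plugin identifiers; they stand in for your own calibration data and board definition.

#include <opencv2/aruco.hpp>
#include <vector>

// Detect markers in a frame and estimate the board pose.
// rvec/tvec map board coordinates into the camera frame (OpenCV's convention).
bool EstimateBoardPose(cv::Mat const& frame,
                       cv::Ptr<cv::aruco::Board> const& board,
                       cv::Mat const& camera_matrix,
                       cv::Mat const& dist_coeffs,
                       cv::Vec3d& rvec, cv::Vec3d& tvec)
{
    std::vector<int> marker_ids;
    std::vector<std::vector<cv::Point2f>> marker_corners;
    cv::aruco::detectMarkers(frame, board->dictionary, marker_corners, marker_ids);
    if (marker_ids.empty())
        return false;

    int markers_used = cv::aruco::estimatePoseBoard(
        marker_corners, marker_ids, board,
        camera_matrix, dist_coeffs, rvec, tvec);
    return markers_used > 0;
}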

Converting the translation vector and rotation matrix to UE basis link


const cv::Mat_<double> TrackedPose::REBASE_CV_TO_UNREAL = (cv::Mat_<double>(3, 3) <<
    0, 1, 0,
    1, 0, 0,
    0, 0, 1
);

const cv::Mat_<double> TrackedPose::REBASE_UNREAL_TO_CV = TrackedPose::REBASE_CV_TO_UNREAL.t();

void TrackedPose::setTransform(cv::Mat_<double> const & rotation_axis_angle, cv::Mat_<double> const & translation)
{
    Translation = translation;

    // Axis-angle (Rodrigues) vector -> 3x3 rotation matrix
    cv::Rodrigues(rotation_axis_angle, RotationMat);
    cv::Mat_<double> inv_rot(RotationMat.t());

    // Invert the extrinsics (the transpose is the inverse rotation, -R^T * t the
    // inverse translation), then re-express the result in Unreal's basis
    TranslationWorldToCam_U = REBASE_CV_TO_UNREAL * -1.0 * inv_rot * Translation;
    RotationMatWorldToCam_U = REBASE_CV_TO_UNREAL * inv_rot * REBASE_UNREAL_TO_CV;
}
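
For reference, the rebasing here is the usual change of basis. Writing $B$ for REBASE_CV_TO_UNREAL (a permutation matrix, so $B^{-1} = B^{\top}$, which is exactly what REBASE_UNREAL_TO_CV holds), a rotation expressed in OpenCV coordinates becomes, in Unreal coordinates,

$$ R_{U} = B \, R \, B^{\top}, $$

while a plain vector such as the translation only needs a single multiplication, $v_{U} = B \, v$. That is why the rotation is sandwiched between the two rebase matrices but the translation is only multiplied on the left.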

Copying the values from these matrices to an FTransform link

// Unreal's projection matrices are for some reason transposed from the traditional representation
// so we index [c][r] to construct that transposed matrix

cv::Mat_<double> const& detected_rot = detected_pose->getRotationCameraUnreal();
for (int32 r : {0, 1, 2}) for (int32 c : {0, 1, 2})
{
    t_mat.M[c][r] = detected_rot(r, c);
}

cv::Mat_<double> const& detected_trans = detected_pose->getTranslationCameraUnreal();
for (int32 r : {0, 1, 2})
{
    t_mat.M[3][r] = detected_trans(r);
}

Depending on whether you use the marker to position the camera or to position an object, it may be necessary to invert that transform.
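
To make those last two steps concrete, a small sketch with illustrative names (t_mat is assumed to start as FMatrix::Identity before the loops above; TrackedActor and any unit scaling are outside the quoted code):

FMatrix t_mat = FMatrix::Identity;
// ... rotation block and translation row copied in by the loops above ...

// The completed matrix converts directly to an FTransform and can be applied
// to an actor; invert it first if the transform is needed in the opposite
// direction (e.g. driving the camera from a world-origin marker).
FTransform marker_transform(t_mat);
// FTransform camera_pose = marker_transform.Inverse();
TrackedActor->SetActorTransform(marker_transform);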

alicranck commented 6 years ago

Thanks! I'll look into it

ezorfa commented 3 years ago

Hi @adynathos !

TranslationWorldToCam_U = REBASE_CV_TO_UNREAL * -1.0 * inv_rot * Translation;
RotationMatWorldToCam_U = REBASE_CV_TO_UNREAL * inv_rot * REBASE_UNREAL_TO_CV;

I understand RotationMatWorldToCam_U, as it is the simple change-of-basis formula.

But I fail to really understand TranslationWorldToCam_U. Can you refer me to a place where this transformation is explained, please? In my understanding, the translation vector should simply be REBASE_CV_TO_UNREAL * Translation.

Thank you!
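
For context, the line in question appears to be the standard inversion of the OpenCV extrinsics. Assuming rvec/tvec from estimatePoseBoard map board (world) points into camera coordinates,

$$ x_{cam} = R \, x_{world} + t \quad\Longrightarrow\quad x_{world} = R^{\top} x_{cam} - R^{\top} t, $$

so the camera's position expressed in world coordinates is $-R^{\top} t$, and only that inverted translation is then rebased with REBASE_CV_TO_UNREAL. Multiplying the raw translation by REBASE_CV_TO_UNREAL alone would instead give the world origin expressed in camera coordinates.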