Without the HMD pose matrix, the coordinates are relative to the origin of the HMD, which is usually inside the head. The Z coordinates of the square are set to 0, so it would render as if it was behind the eyes.
You can't really do screen-space rendering in VR. It's better to render objects a comfortable distance from the eyes. Try setting the Z coordinate to something like 1 or 2 meters. Also, it won't project correctly if you change the last row of the matrix.
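For example, a hypothetical quad with its vertices pushed out along Z might look something like this (in OpenVR's right-handed convention -Z is forward from the headset, so the sign may need flipping depending on your setup):
// Hypothetical vertex data: a 1 m x 1 m quad roughly 2 m in front of the viewer.
// A Z of 0 would leave it at the head origin, i.e. behind/inside the eyes.
float quadVertices[] = {
    -0.5f, -0.5f, -2.0f,
     0.5f, -0.5f, -2.0f,
     0.5f,  0.5f, -2.0f,
    -0.5f,  0.5f, -2.0f,
};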
Then this should be good?
Matrix4 openvr::ConvertSteamVRMatrixToMatrix4(const vr::HmdMatrix34_t& matPose)
{
    // Repack the row-major OpenVR 3x4 pose into a column-major 4x4 Matrix4 (OpenGL layout).
    Matrix4 matrixObj(
        matPose.m[0][0], matPose.m[1][0], matPose.m[2][0], 0.0,
        matPose.m[0][1], matPose.m[1][1], matPose.m[2][1], 0.0,
        matPose.m[0][2], matPose.m[1][2], matPose.m[2][2], 0.0,
        matPose.m[0][3], matPose.m[1][3], matPose.m[2][3], 1.0f
    );
    return matrixObj;
}
Matrix4 openvr::GetHMDMatrixPoseEye(vr::Hmd_Eye nEye)
{
    // Eye-to-head transform from OpenVR, inverted to get the head-to-eye transform.
    vr::HmdMatrix34_t matEye = m_pHMD->GetEyeToHeadTransform(nEye);
    Matrix4 mat4OpenVR = Matrix4(
        matEye.m[0][0], matEye.m[1][0], matEye.m[2][0], 0.0,
        matEye.m[0][1], matEye.m[1][1], matEye.m[2][1], 0.0,
        matEye.m[0][2], matEye.m[1][2], matEye.m[2][2], 0.0,
        matEye.m[0][3], matEye.m[1][3], matEye.m[2][3], 1.0f);
    return mat4OpenVR.invert();
}
Matrix4 openvr::GetHMDMatrixProjectionEye(vr::Hmd_Eye nEye)
{
    // Per-eye projection from OpenVR, repacked into column-major order.
    vr::HmdMatrix44_t mat = m_pHMD->GetProjectionMatrix(nEye, m_fNearClip, m_fFarClip);
    Matrix4 mat4OpenVR = Matrix4(
        mat.m[0][0], mat.m[1][0], mat.m[2][0], mat.m[3][0],
        mat.m[0][1], mat.m[1][1], mat.m[2][1], mat.m[3][1],
        mat.m[0][2], mat.m[1][2], mat.m[2][2], mat.m[3][2],
        mat.m[0][3], mat.m[1][3], mat.m[2][3], mat.m[3][3]);
    return mat4OpenVR;
}
Matrix4 openvr::getCurrentViewProjectionMatrix(vr::Hmd_Eye nEye)
{
    Matrix4 matMVP = Matrix4();
    Matrix4 matEye = GetHMDMatrixPoseEye(nEye);
    Matrix4 matProjection = GetHMDMatrixProjectionEye(nEye);
    Matrix4 matHmd = ConvertSteamVRMatrixToMatrix4(m_rTrackedDevicePose[vr::k_unTrackedDeviceIndex_Hmd].mDeviceToAbsoluteTracking);
    matMVP = matEye * matProjection * matHmd;
    return matMVP;
}
If I use it like this, the square is always visible and simply stretches and deforms when I move my head, but there is still some misalignment between the two eyes when looking through the headset; it's easily noticeable at the edges of the shape.
I'm not sure if I should invert matHmd, because if I do, things only seem to get worse: the square stays around the corners and disappears easily when I move my head around.
Also, how do I set the Z coordinate to 1 or 2 meters like you recommended?
Btw, thank you for all the help you've been giving me on all my issues; it's been a lifesaver, honestly.
One way is to add a Z offset directly to the vertices on line 91 in VrApp.cpp in your original code. I'm not sure if it needs to be positive or negative.
The order of the matrix multiplications in the code above doesn't really make sense. Each matrix represents a transform from one reference space to another, and it helps to think about where those spaces are located. The inverted matHmd goes from the VR tracking space to the headset origin, matEye goes from the headset origin to the optical center of the lens-display assembly, and matProjection is a projection onto the display plane.
For something positioned relative to the headset, you should only need matMVP = matProjection * matEye (the Matrix4 class composes them from right to left), which makes the vertex coordinates relative to the headset origin.
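As a sketch of the two compositions, using the matrices from the functions above (assuming the Matrix4 class from the OpenVR samples, whose invert() inverts in place):
// World-locked content: vertices live in tracking space, so the inverted HMD pose is needed.
Matrix4 matWorldLockedMVP = matProjection * matEye * matHmd.invert();
// Head-locked content: vertices are already expressed relative to the headset origin.
Matrix4 matHeadLockedMVP = matProjection * matEye;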
Okay, it makes the matrices easier to understand when seen that way, but by just using matMVP = matProjection * matEye the square is completely gone (I assume it's at the origin of the headset, which is why it can't be seen). And applying an offset to its vertices only seems to enlarge or shrink the square itself; it doesn't move it back or forth along the Z axis.
A good way of debugging it is using the RenderDoc utility to capture a frame, and using the Mesh Viewer window to see where the vertices end up being rendered relative to the camera.
That tool really helped, thanks. I was able to see that the square was being rendered right behind the eyes because the values in the last row of gl_Position were under 1, so I added 1.0 to every value in that row to keep the proportions. But, sadly, the eyes still don't align correctly, and now I really feel lost. I can see the shape right in front of me and each eye's view is slightly different, but I can still see some separation at the edges of the square.
This is how both eyes look on RenderDoc:
Where are you adding the values? If it's to the matrix, any transforms should be applied first (rightmost), i.e. matMVP = matProjection * matEye * matMyModel
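For instance, a hypothetical matMyModel holding the quad's Z offset might look like this (a sketch, assuming the Matrix4 class from the OpenVR samples, which default-constructs to identity and has a translate() helper; the sign of the offset may need flipping):
Matrix4 matMyModel;                                   // identity
matMyModel.translate(0.0f, 0.0f, -2.0f);              // push the quad ~2 m in front of the headset
Matrix4 matMVP = matProjection * matEye * matMyModel; // model transform is applied to the vertices first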
I was adding them straight to the last row of gl_Position in the shader. I will try applying it to the matrix first, like you said, and see if that works. Thanks
IT'S FIXED. It was all because of the line that loads the MVP matrix into the shader's uniform: glUniformMatrix4fv(m_nQuadCameraMatrixLocation, 1, GL_FALSE, currentViewProjectionMatrix.get());
I had accidentally left the transpose parameter set to GL_TRUE; with it on GL_FALSE, the eyes now align perfectly.
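For reference, the corrected call (the conversion functions above already write the OpenVR matrices out in column-major order, which is what OpenGL expects, so no extra transpose is needed):
// Upload the column-major MVP as-is; GL_TRUE would transpose it a second time.
glUniformMatrix4fv(m_nQuadCameraMatrixLocation, 1, GL_FALSE, currentViewProjectionMatrix.get());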
I have defined a very simple scene with a flat 2D square made of 2 triangles. I have obtained the MVP matrix by multiplying the projection from GetHMDMatrixProjectionEye with the eye pose from GetHMDMatrixPoseEye, and then applied it to the coordinates of the square in the vertex shader.
I have not taken the HMD pose into account because, for now, I don't want the square to move as I move my head; I just want a flat, static 2D square that can be seen correctly in the headset.
I know that the matrices are being applied to the coordinates, but when looking through the headset the borders of the square don't fully align.
If I apply the HMD pose matrix to the MVP, it now moves around as I move my head, but it still doesn't look completely aligned.
Also, I have had to set the last row of the resulting MVP matrix to (0, 0, 0, 1) so that the square was always visible and rotated in place but didn't move around the world.
I'm not sure what I'm missing to get them completely aligned. I know it's hard to see the issue with just those pictures; the project is uploaded here: https://github.com/Outssiss/CameraProjectVR