ValveSoftware / openvr

OpenVR SDK
http://steamvr.com
BSD 3-Clause "New" or "Revised" License

Mixed Reality HMDs report incorrect eye-to-head shifts #727

Open mediadog opened 6 years ago

mediadog commented 6 years ago

When I hooked up my Acer MR headset using OpenVR in vvvv, things looked somewhat wacko, with the right eye having a slight pitch relative to the left. Looking at the eye-to-head transforms returned by GetEyeToHeadTransform for each eye, I found that besides the X rotation on the right eye, the left eye translation was 0,0,0 and the right eye X translation was the full interocular distance.

I tested this again with a Samsung Odyssey MR headset, and while the right eye pitch was gone (so that is an Acer problem), the reported left eye-to-head translation was still 0,0,0, and the right eye's was the full interocular X shift.

So not only are the eyes not properly split on X around the head position, but they are also both at the head zero for Z instead of shifted forward.

benbuzbee commented 6 years ago

The idea is that pose * GetEyeToHeadTransform()^-1 for each eye will be correct, because the pose itself is the pose of the left eye. If you want a point in between the two eyes, you could compute both eye poses this way and then interpolate to find the median pose.
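
A minimal sketch of that approach with the OpenVR C++ API, assuming the HMD pose is a standard vr::HmdMatrix34_t device-to-tracking-space transform; the Multiply() helper and the midpoint averaging are illustrative, not part of the API:

```cpp
// Sketch only: compose per-eye poses from the HMD pose and
// GetEyeToHeadTransform(), then average the two eye translations to
// approximate a "between the eyes" origin.
#include <openvr.h>

// Multiply two 3x4 row-major OpenVR matrices, treating each as a 4x4
// affine transform with an implicit [0 0 0 1] bottom row.
static vr::HmdMatrix34_t Multiply(const vr::HmdMatrix34_t &a,
                                  const vr::HmdMatrix34_t &b)
{
    vr::HmdMatrix34_t r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 4; ++j)
            r.m[i][j] = a.m[i][0] * b.m[0][j] +
                        a.m[i][1] * b.m[1][j] +
                        a.m[i][2] * b.m[2][j] +
                        (j == 3 ? a.m[i][3] : 0.0f);
    return r;
}

void ComputeEyeAndCenterPositions(vr::IVRSystem *system,
                                  const vr::HmdMatrix34_t &hmdPose)
{
    // Per-eye pose in tracking space: hmdPose * eyeToHead.
    vr::HmdMatrix34_t leftEye  = Multiply(hmdPose, system->GetEyeToHeadTransform(vr::Eye_Left));
    vr::HmdMatrix34_t rightEye = Multiply(hmdPose, system->GetEyeToHeadTransform(vr::Eye_Right));

    // Approximate a point between the eyes by averaging the translation columns.
    float center[3];
    for (int i = 0; i < 3; ++i)
        center[i] = 0.5f * (leftEye.m[i][3] + rightEye.m[i][3]);

    // ... use leftEye, rightEye, and center for rendering or debugging ...
    (void)center;
}
```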

mediadog commented 6 years ago

I do not understand the logic of that - the HMD does not rotate around the left eye! Also, that is not how the Vive works: each Vive eye-to-head transform returns an X translation of +/- half the interocular distance and about -1.5 cm in Z. Sounds like the Vive uses one standard and MR another. I'll have to test and see what the Rift does, but the Vive behavior sounds more correct.

benbuzbee commented 6 years ago

Curious what you're trying to do that requires you to get a center of rotation of the HMD? The Vive center of rotation is also likely not 1.5 cm on Z from the left eye; that would be a rather small head :)

The reason for this implementation has to do with how the Mixed Reality platform does reprojection: it requires you to render with precise poses, which are given per eye, and there is no eye-to-head transform available. Depending on what your use case is, there may be a way to accomplish it, or we may have to think about it and see what we can do.

mediadog commented 6 years ago

Yeah, I figured it was an MR practice being exposed here. Naive user that I am, I expect GetEyeToHEADTransform to not actually be GetEyeToLEFTEYETransform in some cases and not others - or more essentially, I expect that when I get the HMD pose it follows the general practice of being the center of the HMD, not one eye or the other, leaving me to guess which.

As a simple use case, if I am making an overview renderer for, say, presentation or debugging, or I'm putting another user into a multi-person scene where I show the HMD and eyes and/or camera frustums, things will look fine with the Vive, but when a user/customer plugs in an MR HMD, one eye is in the center of the HMD and the other on the right edge. Makes us all look bad.
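
For that kind of overview/debug view, one way to place the frusta without assuming where the runtime puts the "head" origin is to compose each eye from GetEyeToHeadTransform and GetProjectionRaw. A rough sketch, where TransformPoint() and the debug-draw call are placeholders for whatever the application already has:

```cpp
// Sketch only: world-space frustum corners for one eye, for a third-person
// overview/debug view. hmdPose is assumed to be the HMD's
// device-to-tracking-space pose.
#include <openvr.h>

// Apply a 3x4 affine transform (rotation + translation) to a point.
static void TransformPoint(const vr::HmdMatrix34_t &m, const float in[3], float out[3])
{
    for (int i = 0; i < 3; ++i)
        out[i] = m.m[i][0] * in[0] + m.m[i][1] * in[1] + m.m[i][2] * in[2] + m.m[i][3];
}

void DrawEyeFrustum(vr::IVRSystem *system, vr::EVREye eye,
                    const vr::HmdMatrix34_t &hmdPose, float depth)
{
    // Raw projection bounds: view-volume edge slopes at unit depth.
    float l, r, t, b;
    system->GetProjectionRaw(eye, &l, &r, &t, &b);

    // Frustum corners 'depth' metres in front of the eye (eye space looks down -Z).
    const float corners[4][3] = {
        { l * depth, t * depth, -depth },
        { r * depth, t * depth, -depth },
        { r * depth, b * depth, -depth },
        { l * depth, b * depth, -depth },
    };

    const vr::HmdMatrix34_t eyeToHead = system->GetEyeToHeadTransform(eye);
    for (int c = 0; c < 4; ++c) {
        // Eye space -> head space -> tracking space.
        float inHead[3], inWorld[3];
        TransformPoint(eyeToHead, corners[c], inHead);
        TransformPoint(hmdPose, inHead, inWorld);
        // DebugDrawLine(eyeOrigin, inWorld);  // placeholder for the app's own debug-draw facility
        (void)inWorld;
    }
}
```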

I think ideally the API docs would specify what the entities "eye" and "head" referred to in the API calls actually are, and what their properties are (like, in the case of a head, where its origin is). I suspect in this case "Head" should really be "HMD", as each HMD's size is a known quantity and each user's head is not; in this context the Vive eye Z offsets make more sense. And as there is precedent with the Vive of assuming the head origin is the center of the HMD, which is pretty intuitive, I would suggest adopting that practice for MR and future HMDs as well.

(Hmmm, here's another thought - for the supplied HMD models, such as the one found in Steam\SteamApps\common\SteamVR\resources\rendermodels\generic_hmd, where is the model origin? The returned HMD pose should ideally match that as closely as possible; then everything will look right.)
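
One way to check this is to load the HMD's render model through IVRRenderModels and draw it at the reported HMD pose; any visible offset between the mesh and the pose axes shows where the model origin sits. A hedged sketch, with minimal error handling and a busy-wait that a real app would replace with polling from its frame loop:

```cpp
// Sketch only: fetch the HMD's render model name and load the model so it
// can be drawn at the reported HMD pose for an origin-alignment check.
#include <openvr.h>
#include <cstdio>

vr::RenderModel_t *LoadHmdRenderModel(vr::IVRSystem *system)
{
    // Ask the runtime which render model the HMD uses (e.g. "generic_hmd").
    char name[512];
    system->GetStringTrackedDeviceProperty(vr::k_unTrackedDeviceIndex_Hmd,
                                           vr::Prop_RenderModelName_String,
                                           name, sizeof(name), nullptr);

    vr::RenderModel_t *model = nullptr;
    vr::EVRRenderModelError err;
    // LoadRenderModel_Async returns VRRenderModelError_Loading until the
    // model is ready; spinning here keeps the sketch short.
    while ((err = vr::VRRenderModels()->LoadRenderModel_Async(name, &model))
               == vr::VRRenderModelError_Loading)
        ;
    if (err != vr::VRRenderModelError_None) {
        std::printf("LoadRenderModel_Async failed: %d\n", (int)err);
        return nullptr;
    }

    // Draw 'model' using the HMD's device-to-tracking pose as its model
    // matrix; any offset between the mesh and the pose axes reveals where
    // the model origin sits relative to the reported pose.
    return model;
}
```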