alvr-org / ALVR

Stream VR games from your PC to your headset via Wi-Fi
MIT License

XR_FB_eye_tracking_social #1744

Closed Hope10086 closed 1 year ago

Hope10086 commented 1 year ago

I was wondering if you could answer some questions about the call to OpenXR's eye tracking extension (spec section 12.55, XR_FB_eye_tracking_social).
I noticed that the official OpenXR documentation says "The xrGetEyeGazesFB function obtains pose for a user's eyes at a specific time and within a specific coordinate system". But when I passed different times as input arguments to the get_eye_gazes function and called get_eye_gazes several times, I got the same output pose each time.

    let face_data = if let Some(context) = &ctx.face_context {
        FaceData {
            eye_gazes: interaction::get_eye_gazes(
                context,
                &ctx.reference_space.read(),
                //to_xr_time(now),
                to_xr_time(target_timestamp),
            ),
            eye_gazes_now: interaction::get_eye_gazes(
                context,
                &ctx.reference_space.read(),
                //to_xr_time(now),
                to_xr_time(now - alvr_client_core::get_head_prediction_offset()),
            ),
            fb_face_expression: interaction::get_fb_face_expression(context, to_xr_time(now)),
            htc_eye_expression: interaction::get_htc_eye_expression(context),
            htc_lip_expression: interaction::get_htc_lip_expression(context),
        }
    } else {
        Default::default()
    };

This is quite inconsistent with my understanding. What should it actually be like?

Best Regards.

zarik5 commented 1 year ago

to_xr_time(now - alvr_client_core::get_head_prediction_offset())

What were you trying to achieve here? get_head_prediction_offset is supposed to be added, not subtracted. But in any case, contrary to head and hand tracking, eye tracking is used mostly for aesthetics, so I decided not to predict the pose (passing to_xr_time(now)), as the output is less jittery.

Hope10086 commented 1 year ago

My original purpose: use a predicted pose for eye tracking (passing to_xr_time(target_timestamp)) to reduce fixation point inaccuracies caused by the fixed delay. Then, to check the accuracy of the prediction, I fetched the pose at both the now time and the target_timestamp time in the same frame, but the output poses were the same. Finally, I changed the value of the xr_time parameter several times (for example now - alvr_client_core::get_head_prediction_offset(), or now - 100 * alvr_client_core::get_head_prediction_offset()) and the output poses were still the same.
There was also no observable difference in my results for the visual fixation points (I visualized this in FrameRender.cpp by copying a black texture to the fixation points' locations).

zarik5 commented 1 year ago

The VR runtime probably discards poses for times far earlier than now, since they are usually not useful.

Hope10086 commented 1 year ago

But the gaze poses we get at target_timestamp are also the same as the ones we get at now, so it doesn't look like prediction is working at all.

zarik5 commented 1 year ago

If that is the case, then it means Meta did not implement prediction at all, or it is a bug on their side.