gb2111 opened this issue 3 years ago
I expect it would be through the input API, in which case whoever makes the first hardware with a SteamVR driver would probably define what the paths are.
You are right, that seems to be the correct way, unless there is something specific I'm not seeing. I guess it's up to HTC in the next couple of months.
I was asking myself this too, to be honest, especially since we have the Vive tracker providing blendshape data in one way and Apple AR providing it in another. The only consensus appears to be largely using shape key data instead of, or alongside, facial landmarks.
Personally, I've forked the repo to have a look at adding shape key floats as something like vr::VRInput()->GetFacialKeyData(), mapped to values such as VRFacial_NoseSneer_R, VRFacial_EyeLook_U, VRFacial_MouthClose, etc. (to adapt a few examples from Unity/ARKit). A rough sketch of what I mean is below.
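Roughly, the shape I'm aiming for looks like this; to be clear, all of these names are made up here, and none of this exists in the real OpenVR headers:

```cpp
// Hedged sketch of a hypothetical facial shape key interface. Nothing here
// exists in the real OpenVR headers: EVRFacialKey, IVRFacialInput, and
// GetFacialKeyData are all invented names for illustration.
#include <cstdint>

namespace vr
{
	// One entry per supported blendshape, adapted from the Unity/ARKit names.
	enum EVRFacialKey : uint32_t
	{
		VRFacial_NoseSneer_R = 0,
		VRFacial_EyeLook_U,
		VRFacial_MouthClose,
		// ...remaining ARKit-style shape keys...
		VRFacial_Count
	};

	class IVRFacialInput
	{
	public:
		// Fills pWeightsOut[i] with the current 0..1 weight of pKeys[i].
		virtual bool GetFacialKeyData( const EVRFacialKey *pKeys,
		                               float *pWeightsOut,
		                               uint32_t unKeyCount ) = 0;
	};
}
```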
It's probably beyond my ability to implement, and beyond my influence to get a PR accepted, but it's worth a shot.
Well, it's been 7 months; my buddy and I now need facial tracking to be supported in our driver, and since there seems to be zero progress on the topic, we'll do it ourselves.
We already have eye tracking working (and mouth tracking in the works) and hardware for it; the only thing we're missing is the driver implementation. And I'm responsible for the driver implementation 🥲
I'll try using the input API where possible; hopefully I won't have to reinvent the wheel...
@okawo80085 I'd like to follow your progress on that if that's possible? My own efforts on a Pi Zero based solution have stagnated a little since putting a hole in my 3D printer's polarising film.
We started working on it very recently; we'll post general progress in this feature request and here.
But most of the discussion related to this project happens on Discord; here's a link to the server we hang out in.
Small update on eye and face tracking: we decided to stick with events for now. The data formats needed for face and eye tracking are too complex for the current VRInput API, and the easiest way to send custom structs to applications is to have our driver send out vendor events. We haven't decided on the final structs that will be sent, though.
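To illustrate what "vendor events" means here, this is a minimal sketch of how an OpenVR driver can emit one; the +1 type offset and the flag packed into the payload are our own convention, not anything OpenVR defines:

```cpp
// Minimal sketch: emitting a vendor-specific event from an OpenVR driver.
// Event type values at or above VREvent_VendorSpecific_Reserved_Start (10000)
// are reserved for vendors; the +1 offset and the payload encoding below are
// this driver's own convention, not anything defined by OpenVR.
#include <openvr_driver.h>

static const vr::EVREventType k_eventEyeTrackingActive =
	static_cast<vr::EVREventType>( vr::VREvent_VendorSpecific_Reserved_Start + 1 );

void SignalEyeTrackingActive( uint32_t unDeviceId, bool bActive )
{
	vr::VREvent_Data_t data{};
	data.reserved.reserved0 = bActive ? 1 : 0; // our own encoding of the flag

	// Queues the event so applications can pick it up when polling events.
	vr::VRServerDriverHost()->VendorSpecificEvent(
		unDeviceId, k_eventEyeTrackingActive, data, 0.0 /* time offset */ );
}
```

On the application side this arrives as a regular VREvent_t with our custom event type when polling with vr::VRSystem()->PollNextEvent().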
Well, using events to pass the full eye tracking state didn't work, so we switched to using events to signal whether eye tracking is active and shared memory to actually pass the gaze state (the shared memory is owned and created by the driver device). The new driver device is almost done (it still needs some cosmetic tweaks).
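To make the shared memory part concrete, here's the kind of layout a driver could publish gaze state through; the struct and field names are illustrative, not our actual format:

```cpp
// Illustrative layout for a gaze-state block in driver-owned shared memory.
// The fields and the seqlock-style sequence counter are one possible design,
// not the actual format this driver uses. Real code would need proper atomic
// operations on the counter for cross-process safety.
#include <cstdint>

struct SharedGazeState
{
	// Incremented by the driver before and after each write (odd = write in
	// progress); readers copy the struct and retry if the counter changed.
	volatile uint64_t unSequence;

	double   flSampleTimeSeconds;   // when the sample was captured
	float    vGazeDirection[2][3];  // per-eye unit gaze vectors (x, y, z)
	float    flEyeOpenness[2];      // per-eye, normalized 0..1
	float    flPupilDilation[2];    // per-eye, normalized 0..1
	uint32_t bActive;               // mirrors the "eye tracking active" event
};
```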
Sorry for bumping this stale issue, but it seems like this is being worked on, as found in the Lighthouse drivers in 2.8.1 by SaddlyItsBradly.
There are already extensions to OpenXR that support these features. If the issue is specifically to have the SteamVR OpenXR runtime support these extensions, then it's about requesting that Valve include them (or using a different PC OpenXR runtime that does): https://github.khronos.org/OpenXR-Inventory/extension_support.html#XR_EXT_eye_gaze_interaction
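For reference, consuming XR_EXT_eye_gaze_interaction from an application looks roughly like this; the extension has to be requested at XrInstance creation, error handling is omitted, and the action names are arbitrary:

```cpp
// Rough sketch: binding an eye-gaze pose action through
// XR_EXT_eye_gaze_interaction. The paths are defined by the extension spec;
// the action names are arbitrary and error handling is omitted.
#include <openxr/openxr.h>
#include <cstring>

void SuggestEyeGazeBinding( XrInstance instance, XrActionSet actionSet )
{
	XrActionCreateInfo actionInfo{ XR_TYPE_ACTION_CREATE_INFO };
	actionInfo.actionType = XR_ACTION_TYPE_POSE_INPUT;
	std::strcpy( actionInfo.actionName, "gaze_pose" );
	std::strcpy( actionInfo.localizedActionName, "Gaze Pose" );
	XrAction gazeAction;
	xrCreateAction( actionSet, &actionInfo, &gazeAction );

	// Paths defined by the extension.
	XrPath gazePosePath, profilePath;
	xrStringToPath( instance, "/user/eyes_ext/input/gaze_ext/pose", &gazePosePath );
	xrStringToPath( instance, "/interaction_profiles/ext/eye_gaze_interaction", &profilePath );

	XrActionSuggestedBinding binding{ gazeAction, gazePosePath };
	XrInteractionProfileSuggestedBinding suggested{ XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDING };
	suggested.interactionProfile = profilePath;
	suggested.countSuggestedBindings = 1;
	suggested.suggestedBindings = &binding;
	xrSuggestInteractionProfileBindings( instance, &suggested );
}
```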
I know it might be early, but is there any plan yet to add an API for lip and eye tracking? Thank you