Yellow-Dog-Man / Resonite-Issues

Issue repository for Resonite.
https://resonite.com

Support VRCFT OSC for Native Eye and Face tracking managers. #1843

Open sveken opened 2 months ago

sveken commented 2 months ago

Is your feature request related to a problem? Please describe.

Currently, as new headsets come out, either the community must add support for them via mods, or time must be taken from the team to implement them natively in Resonite.

Describe the solution you'd like

Now that Resonite has the capability to handle OSC, adding the option to support VRCFT's Unified Expressions parameters for direct face and eye tracking would open up support for pretty much all headsets and would leverage a much wider community for device support.

Some manufacturers, like Pimax for example, directly release modules for VRCFT as their official social integration for the Crystal. This would allow Resonite to piggyback off the eye-tracking work for that headset that is already done.

Describe alternatives you've considered

Currently the only method is to use the VRCFTReceiver mod; however, it does not drive the eye manager directly and relies on value copies.

Additional Context

Related to https://github.com/Yellow-Dog-Man/Resonite-Issues/issues/1841, but instead using the data to directly drive the face and eye managers, making it behave like a natively supported headset.

Requesters

No response

shiftyscales commented 2 months ago

As you noted, this is related to #1841, but also has some overlap with #975 / #1220.

In particular as @Geenz highlighted in #975:

Presently, we need a few things to make this work. Currently determining the specific integration points:

- An OSC parser
- An OSC eye and face tracking driver
- An OSC-compatible interconnect on either port 9000 or 9015
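For context on the first item, an OSC message is a null-padded address string, a type-tag string, and big-endian arguments. A minimal stdlib-only sketch of decoding one such message (function names are illustrative; a real implementation would also handle bundles, blobs, and the remaining type tags):

```python
import struct

def _read_padded_string(data: bytes, offset: int):
    # OSC strings are ASCII, null-terminated, and padded to a 4-byte boundary.
    end = data.index(b"\x00", offset)
    text = data[offset:end].decode("ascii")
    length = end - offset
    # String plus terminator, rounded up to a multiple of 4 bytes.
    offset += ((length + 4) // 4) * 4
    return text, offset

def parse_osc_message(data: bytes):
    """Decode a single OSC message: address, type tags, then arguments."""
    address, off = _read_padded_string(data, 0)
    tags, off = _read_padded_string(data, off)
    args = []
    for tag in tags.lstrip(","):
        if tag == "f":        # 32-bit big-endian float
            args.append(struct.unpack_from(">f", data, off)[0])
            off += 4
        elif tag == "i":      # 32-bit big-endian int
            args.append(struct.unpack_from(">i", data, off)[0])
            off += 4
        elif tag == "s":      # padded string
            s, off = _read_padded_string(data, off)
            args.append(s)
        else:
            raise ValueError(f"unsupported OSC type tag: {tag}")
    return address, args
```

The interconnect would then just be a UDP socket bound to the chosen port, feeding received datagrams through a parser like this and handing `/avatar/parameters/...` addresses to the tracking driver.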

Do you have any implementation details you can provide on VRCFT, and what all it tracks/supports in terms of eye/face tracking, and if it would map 1:1 onto our existing systems, or if there would be additional work needed, e.g. additional expressions, etc. @sveken?

Seeking input from @Geenz on if this issue is sufficiently different from #975 which seems to share a lot of the underlying work required for this issue.

Geenz commented 2 months ago

Pretty much everything in #975 applies as prerequisites to this, and would (to some extent) directly enable it as well for people who choose to use VRCFT.

Another (preferred) approach is OpenXR support more generally - for which we have an internal version of ALXR to help bridge the gap until Sauce ships.

Geenz commented 2 months ago

For Unified Expressions - that would be a different set of work to enable. Presently we map everything to SRanipal’s blendshape parameters.

sveken commented 2 months ago

Yup, Unified Expressions is just another naming standard for face/eye blendshapes. Reading into it more, it seems to be what VRCFT maps all the different devices to in order to make the end result universal. However, I believe it can be mapped to what is already in Resonite.

As for what VRCFT can support eye/face-tracking-wise, my understanding is that it supports everything the device module supports; the limitation is the device you are using. VRCFT is modular: every device has its own module that loads into it and provides tracking data. Pimax, for example, wrote their own official support module for it (for eye tracking), and there is a community module for the Crystal to support full-range eye openness and dilation. Other projects like Project Babble (open-source face tracking) also have modules.

The current official hardware support list is here: https://docs.vrcft.io/docs/hardware/interface-compatibilities. However, there are a lot of community modules; for example, there is even a Steam Link one.

As for how VRCFT maps to existing systems, the process is explained a lot better on hazre's mod page here https://github.com/hazre/VRCFTReceiver?tab=readme-ov-file#how-it-works but I'll try to explain here too.

VRCFT normally reads a JSON file that lists all of the avatar's set-up parameters and what it wants tracked. This JSON file is copied to a location in AppData, and on avatar load, VRCFT is told via OSC to load that specific avatar's .json. The file can be copied manually or, in the mod's case, is auto-generated on install. For Resonite, the JSON file could contain all of the eye/face tracking data Resonite supports; an example is here https://github.com/hazre/VRCFTReceiver/blob/main/static/vrc_parameters.json VRCFT's doc page on OSC parameters is also here https://docs.vrcft.io/docs/tutorial-avatars/tutorial-avatars-extras/parameters.
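As a rough sketch, a Resonite-side parameter file in that format might contain entries like the following. The avatar id and parameter names here are illustrative placeholders loosely following the structure of the linked vrc_parameters.json, not the mod's actual file:

```python
import json

# Hypothetical subset of an avatar OSC parameter file in the format VRCFT
# reads: each entry declares an address and type that VRCFT writes values to.
resonite_params = {
    "id": "resonite_native_tracking",   # illustrative avatar id
    "name": "Resonite Face/Eye Tracking",
    "parameters": [
        {"name": "JawOpen",
         "input": {"address": "/avatar/parameters/JawOpen", "type": "Float"}},
        {"name": "EyeLidLeft",
         "input": {"address": "/avatar/parameters/EyeLidLeft", "type": "Float"}},
    ],
}

# This is the JSON that would be dropped into the AppData location VRCFT scans.
print(json.dumps(resonite_params, indent=2))
```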

Following the project, it intends to be a universal interface for all new hardware, whether from a company or DIY. Adding compatibility would open up support for a lot of current and future hardware.

EDIT: There is also a spreadsheet on how VRCFT maps between SRanipal and Unified Expressions. This is for the Unity template, but it does show the naming scheme, the list of parameters it supports, and their relation to the SRanipal names: https://docs.google.com/spreadsheets/d/118jo960co3Mgw8eREFVBsaJ7z0GtKNr52IB4Bz99VTA/edit#gid=0
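As an illustration, that kind of mapping boils down to a name-translation table. The two entries below are examples of the two naming conventions, not a complete or verified extract of the spreadsheet:

```python
# Illustrative (not exhaustive) map from Unified Expressions shape names to
# SRanipal-style names; the linked spreadsheet is the authoritative source.
UE_TO_SRANIPAL = {
    "JawOpen": "Jaw_Open",
    "MouthSmileLeft": "Mouth_Smile_Left",
}

def translate(ue_name):
    """Return the SRanipal-style name for a UE shape, or None if unmapped."""
    return UE_TO_SRANIPAL.get(ue_name)
```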

Geenz commented 2 months ago

As noted - OSC would be needed first before we can support VRCFT. Unified Expressions is still its own separate work - notably the expression driver would need some work to support multiple standards for this sort of thing, given the space is somewhat fragmented presently (Quest, SRanipal, VRCFT, Pico, Babble, etc.). I’ve looked into it - and it’s not the most straightforward pile of work.

FlameSoulis commented 2 months ago

Couldn't some of it be simplified with plug-in libraries? Like a simple, modular system where support for each standard could either be written by the community or composed officially, as a step toward internal processing of the facial systems? I get that writing such a foundation would add some work, but the diversity of standards doesn't seem to be getting any slimmer (bear in mind, with the latest announcement of Horizon OS being opened up to other manufacturers, who knows what features will be put into headsets and what standards they will follow).

EDIT: What about a kind of YAML setup?
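The plug-in idea above could be as small as one interface per standard that normalizes device parameter names onto a common internal blendshape set. A minimal sketch, with hypothetical class and internal names (Resonite itself is C#; Python is used here just for brevity):

```python
from abc import ABC, abstractmethod

class ExpressionStandard(ABC):
    """Hypothetical plug-in interface: each tracking standard (SRanipal,
    Unified Expressions, Quest, ...) maps its own parameter names onto a
    common internal blendshape set."""

    @abstractmethod
    def normalize(self, name, value):
        """Map a standard-specific parameter to (internal_name, value)."""

class UnifiedExpressionsPlugin(ExpressionStandard):
    # Illustrative internal names; not Resonite's actual naming.
    _MAP = {"JawOpen": "jaw_open", "EyeLidLeft": "eye_lid_left"}

    def normalize(self, name, value):
        # Fall back to the original name so unknown shapes still pass through.
        return self._MAP.get(name, name), value
```

The face/eye managers would then only ever see the internal names, regardless of which standard's plug-in produced them.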