Yellow-Dog-Man / Resonite-Issues

Issue repository for Resonite.
https://resonite.com

VMC protocol #231

Open vexvesper opened 1 year ago

vexvesper commented 1 year ago

Is your feature request related to a problem? Please describe.

The main problem this could solve is letting people who can't afford VR hardware still move their avatars as if they were in VR.

Describe the solution you'd like

Some sort of VMC implementation, so that users who only have a webcam can use VSeeFace or similar software to send tracking data to their avatar via the VMC protocol. This would allow more VTubers and others without a VR setup to use Resonite for a more interactive experience.

Describe alternatives you've considered

https://github.com/Ruzeh3D/NeosWCFaceTrack was used way back when, for face tracking only. I'm not sure how it works, but VMC would allow not just face tracking, but full-body tracking with the right software.

Additional Context

EDIT: movement with VMC could be handled through gestures, e.g. open palm for laser, closed palm for click with the radial menu (with move being an option), two-handed pinch to scale, etc.

shiftyscales commented 8 months ago

Related: #1537.

shiftyscales commented 8 months ago

Found some documentation on this protocol here: https://protocol.vmc.info/english.html
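For reference, the spec describes VMC as OSC messages over UDP: bone transforms arrive as `/VMC/Ext/Bone/Pos` (a bone name plus position and rotation-quaternion floats) and blendshape weights as `/VMC/Ext/Blend/Val`, committed in batches with `/VMC/Ext/Blend/Apply`. Here's a minimal receiver sketch using the third-party python-osc library, assuming the spec's default port:

```python
# Minimal VMC receiver sketch using the third-party python-osc library
# (pip install python-osc). 39539 is the default receiver ("Marionette")
# port from the VMC spec; adjust to match the sending application.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer


def on_bone(address, name, px, py, pz, qx, qy, qz, qw):
    # /VMC/Ext/Bone/Pos: humanoid bone name, local position (x, y, z),
    # local rotation quaternion (x, y, z, w).
    print(f"bone {name}: pos=({px:.3f}, {py:.3f}, {pz:.3f})")


def on_blend(address, name, value):
    # /VMC/Ext/Blend/Val: blendshape name and weight; senders finish a
    # batch with /VMC/Ext/Blend/Apply.
    print(f"blendshape {name} = {value:.3f}")


dispatcher = Dispatcher()
dispatcher.map("/VMC/Ext/Bone/Pos", on_bone)
dispatcher.map("/VMC/Ext/Blend/Val", on_blend)

BlockingOSCUDPServer(("127.0.0.1", 39539), dispatcher).serve_forever()
```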

copygirl commented 1 month ago

Another VTuber I know expressed interest in Resonite and was also wondering whether alternative tracking methods could be used to move one's model inside Resonite. The VMC protocol seems to be a fairly common standard in the VTubing space, allowing different programs to communicate. As a perhaps relevant example, XR Animator supports full-body tracking from just a camera input, and many other applications offer decent hand tracking as well. Not to mention the extended ecosystem of various hardware trackers that can speak the protocol.

Beyond the obvious improvement of letting regular flat-screen users express themselves better, this could also open up possibilities for more types of users, such as non-VR VTubers using Resonite as their preferred rendering engine: one they can script, where they can use existing resources such as worlds, and where they can potentially even interact with their viewers in new and interesting ways.

If an implementation is going to be worked on, I would recommend making it possible to merge data from multiple VMC sources, sort of like VNyan, which (if I recall correctly) lets you specify how much each source affects individual tracked bones / blendshapes. Some people combine webcam tracking with phone tracking for better results, for example. A rough sketch of what such a merge could look like follows below.
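To make the merging idea concrete, here's a sketch of per-bone weighted blending between two VMC sources. The ports, source names, weight table, and plain-lerp blend are all assumptions about how such a merge could work, not anything taken from VNyan or Resonite:

```python
# Sketch of merging two VMC sources with per-bone weights (not Resonite
# code; ports, source names, and the weight table are all hypothetical).
# Requires the third-party python-osc library (pip install python-osc).
import threading
import time

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import ThreadingOSCUDPServer

# Latest bone data per source: bone name -> (px, py, pz, qx, qy, qz, qw)
latest = {"webcam": {}, "phone": {}}

# Per-bone mix: 0.0 = webcam only, 1.0 = phone only, default 0.5.
# A VNyan-style UI would expose sliders like this per bone/blendshape.
weights = {"Head": 0.2, "LeftHand": 0.9, "RightHand": 0.9}


def make_bone_handler(source):
    def on_bone(address, name, *transform):
        latest[source][name] = transform  # 7 floats per the VMC spec
    return on_bone


def blended(name):
    a = latest["webcam"].get(name)
    b = latest["phone"].get(name)
    if a is None or b is None:
        return a or b  # only one source has seen this bone so far
    w = weights.get(name, 0.5)
    # Plain lerp for brevity; a real merge should slerp the quaternion
    # part (the last four floats) and renormalize it.
    return tuple((1 - w) * x + w * y for x, y in zip(a, b))


def listen(source, port):
    dispatcher = Dispatcher()
    dispatcher.map("/VMC/Ext/Bone/Pos", make_bone_handler(source))
    ThreadingOSCUDPServer(("127.0.0.1", port), dispatcher).serve_forever()


threading.Thread(target=listen, args=("webcam", 39539), daemon=True).start()
threading.Thread(target=listen, args=("phone", 39540), daemon=True).start()

while True:  # a real bridge would re-send merged bones downstream instead
    time.sleep(1.0)
    print("merged Head:", blended("Head"))
```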

FlameSoulis commented 1 month ago

Having an API for user-made devices may be the ultimate solution here. That, or the more direct feature request: allowing desktop users to have positional point targets of their own, plus face tracking values.

I'm thinking a VMC-to-OSC bridge could be used to solve the face tracking part, but it wouldn't carry rotation and position data, which would be more helpful.
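For the face-tracking half of that, a bridge could be as small as the sketch below: it listens for VMC blendshape messages and re-emits each one as a single OSC float. The outgoing address scheme and port are made-up placeholders, since this thread doesn't establish what Resonite would actually listen for:

```python
# Minimal VMC-to-OSC blendshape bridge sketch (python-osc required).
# The outgoing "/avatar/face/<name>" addresses and port 9000 are
# hypothetical placeholders, not a real Resonite API.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

out = SimpleUDPClient("127.0.0.1", 9000)


def forward_blend(address, name, value):
    # /VMC/Ext/Blend/Val carries (blendshape name, weight); forward
    # each as a plain one-float OSC message.
    out.send_message(f"/avatar/face/{name}", float(value))


dispatcher = Dispatcher()
dispatcher.map("/VMC/Ext/Blend/Val", forward_blend)

# 39539 is the VMC spec's default receiver port; bone position and
# rotation messages are simply ignored here, matching the limitation
# mentioned above.
BlockingOSCUDPServer(("127.0.0.1", 39539), dispatcher).serve_forever()
```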

Frooxius commented 1 month ago

I consider that to be a separate problem.

We do have a general plan to open up and modularize the device driver system, so it's easy for the community to make their own drivers or modify the official ones.

But there are still benefits to having an official one, since people wouldn't need to find and install it from third parties, and the functionality would just work straight out of the box.