UPC-ViRVIG / MMVR

Repository for the SCA 2022 paper "Combining Motion Matching and Orientation Prediction to Animate Avatars for Consumer-Grade VR Devices"
https://upc-virvig.github.io/MMVR/

Few doubts related to recording and preparing the data #4

Closed tejaswiniiitm closed 2 years ago

tejaswiniiitm commented 2 years ago

Hi @JLPM22 ,

  1. How is motion handled for users of different heights (e.g. children, adults)? I know that the motion matching DB is chosen based on the height ratio.
     a. How is the avatar model handled? Is the avatar's scale changed according to the user's height?
     b. Does the DB need data recorded at different heights to work for people of any height? If not, what happens when the user's height differs from the recorded one? Are the vectors and values scaled appropriately so that the motion animates as expected?
  2. In the paper, it was mentioned that the controller and HMD transforms were computed for the training data by adding offsets to the recorded data. Where can those offsets be changed/configured?
  3. In the paper, it was mentioned to use a higher alpha (position accuracy parameter) for a third-person character, and a lower alpha in the case of a self-avatar. Where can this alpha be changed?
  4. The full-body motion setup in the project is for a self-avatar. To apply it to a third-person character, I think we need to somehow store the data being applied to the avatar's bones and transforms in the OnSkeletonTransformUpdated method of MotionMatchingSkinnedMeshRenderer (e.g. in a JSON file) and continuously apply it to the other third-person character in its Update() method. Am I correct? Or is there a better way, or scripts/tools you have already written for third-person character motion?

Thank you!

JLPM22 commented 2 years ago

Hi @tejaswiniiitm

  1. I have a script in the scene that, when the user presses the Right Controller's B button, takes their current height and scales the avatar accordingly. So you could ask users to press that button while standing. The motion should work for different users' heights; however, if the height deviates too much, there may be self-intersections, or, for instance, an adult locomotion sequence may not make sense when applied to children. For those cases, I would prepare different MotionMatchingData with different recorded animations and select them based on the user.
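A minimal sketch of that calibration step, assuming Unity's standard `UnityEngine.XR.InputDevices` API; `avatarRoot` and `avatarSourceHeight` are hypothetical names, not the repository's actual fields:

```csharp
using UnityEngine;
using UnityEngine.XR;

// Hypothetical calibration script: when the user presses the right
// controller's B (secondary) button, scale the avatar so its height
// matches the current HMD height above the floor.
public class AvatarHeightCalibrator : MonoBehaviour
{
    public Transform avatarRoot;            // root of the avatar model (assumed name)
    public float avatarSourceHeight = 1.8f; // height the avatar was modeled at, in meters

    void Update()
    {
        InputDevice right = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
        InputDevice hmd = InputDevices.GetDeviceAtXRNode(XRNode.Head);

        if (right.TryGetFeatureValue(CommonUsages.secondaryButton, out bool pressed) && pressed &&
            hmd.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 headPos))
        {
            // Approximate the user's height by the HMD's height above the floor,
            // then scale the avatar by the resulting ratio.
            float userHeight = headPos.y;
            avatarRoot.localScale = Vector3.one * (userHeight / avatarSourceHeight);
        }
    }
}
```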

  2. You can find the creation of the trackers database for training in the following script: https://github.com/UPC-ViRVIG/MMVR/blob/main/MMVR/Assets/DirectionPrediction/TrackersDataset.cs. Line 32 defines the offsets (local space of the trackers).
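For illustration only (these values and names are hypothetical, not the actual contents of line 32 of `TrackersDataset.cs`), such offsets are typically constant displacements expressed in the local space of each tracked joint, added to the recorded motion to approximate where the HMD and controllers would sit on a real user:

```csharp
using UnityEngine;

// Hypothetical example of tracker offsets: local-space displacements
// added to the recorded head and hand joints when building the
// training database of tracker transforms.
public static class TrackerOffsets
{
    public static readonly Vector3 HMDOffset = new Vector3(0.0f, 0.10f, 0.10f);       // placeholder values
    public static readonly Vector3 ControllerOffset = new Vector3(0.0f, 0.02f, 0.08f); // placeholder values

    // Convert a local-space offset to a world-space position at a joint.
    public static Vector3 Apply(Vector3 jointPosition, Quaternion jointRotation, Vector3 localOffset)
    {
        return jointPosition + jointRotation * localOffset;
    }
}
```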

  3. You can find it in the VRCharacterController, under the property Max Distance Simulation Bone.

  4. I guess you want to use a third-person character for multiplayer/collaboration. Yes, the best solution would be to store the bones and transforms as you said, and then send them to other PCs using a multiplayer library (like Netcode for GameObjects). I would apply them in LateUpdate(), because if you are using any of Unity's animation features, they will override bone transforms applied in Update().
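A rough sketch of that pattern (all names are hypothetical; in a networked setup the copied transforms would instead be serialized and sent through whatever multiplayer library you use):

```csharp
using UnityEngine;

// Hypothetical pose-relay component: copies the motion-matched skeleton's
// local transforms onto a second, third-person avatar in LateUpdate(),
// after Unity's animation pass, so nothing overwrites them.
public class BonePoseRelay : MonoBehaviour
{
    public Transform[] sourceBones; // bones driven by motion matching
    public Transform[] targetBones; // matching bones on the third-person avatar

    void LateUpdate()
    {
        // For multiplayer, serialize these local transforms and send them
        // to other PCs instead of copying them directly.
        for (int i = 0; i < sourceBones.Length; i++)
        {
            targetBones[i].localPosition = sourceBones[i].localPosition;
            targetBones[i].localRotation = sourceBones[i].localRotation;
        }
    }
}
```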

Hope it helps! :)

tejaswiniiitm commented 2 years ago

Okay, got it. Thank you for patiently clearing up all my doubts!