UPC-ViRVIG / MMVR

Repository for the SCA 2022 paper "Combining Motion Matching and Orientation Prediction to Animate Avatars for Consumer-Grade VR Devices"
https://upc-virvig.github.io/MMVR/

Model issues after reproducing Training & data preprocessing part #3

Closed: tejaswiniiitm closed this issue 1 year ago

tejaswiniiitm commented 1 year ago

Hi @JLPM22

As a first step, I am trying to reproduce the training and data preprocessing pipeline on the raw data you provided.

Procedure I followed: I generated the MotionMatching data and the Train/Test data from the raw BVH files you provided, using the editor scripts in MotionMatching and MotionSynthesis respectively.

I updated the references of the SquatMotionMatchingController GameObject under Squat Datasets to point to the newly generated MotionMatching data assets.

I trained the direction prediction model using python/training_direction.py with the default_config you provided, updating the train, test, and model paths to the new ones, and pointed the VRDirectorPredictor component to the newly generated model.
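
For reference, the path updates amount to something like the following near the top of the script (the first two variable names here are placeholders for illustration; filename_input is the actual name used in the script):

# Placeholder names, only to illustrate which paths I repointed:
train_data_path = "<path to my newly generated train data>"
test_data_path = "<path to my newly generated test data>"
filename_input = "data/direction_predictor.onnx"  # model the VRDirectorPredictor component loads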

Problem I am facing:

With all the above new settings, while testing in unity,

  1. When starting from a standing position in play mode, the avatar's direction is completely wrong (often the opposite of mine) and its hands are crisscrossed. For example, when I turn left, the avatar moves forward.
  2. When starting from a sitting position in play mode, everything works as expected (turning around, directional movement, etc.), even after I stand up from the sitting position.
  3. To determine whether the MotionMatching data or the trained direction predictor model is at fault, I pointed the predictor to the model you provided while keeping my newly generated MotionMatching data. Everything then worked fine, even when starting from a standing position. So the model I generated is causing the issue!

Since I made no code changes to the training, the raw data, or the data generation steps you provided, I cannot understand the reason for this weird behaviour. How can this issue be solved?

I have uploaded the newly generated data and models here: https://drive.google.com/file/d/1NPiHn72TUHVADphRHut8SLvURBB0HSz8/view?usp=sharing

JLPM22 commented 1 year ago

Hi @tejaswiniiitm ! Thank you very much for the detailed description and feedback.

I tried the newly generated data, and I agree it is a problem with the model.

I realized the hyperparameters and default_config in python/training_direction.py were from an old version. I have updated the script with the new parameters; specifically, I changed the following lines:

from ray import tune  # needed for the search-space definitions below

# Hyperparameters
use_tune = False  # when False, default_config is used instead of the tune search space
use_adam = True
epochs = 10
filename_input = "data/direction_predictor.onnx"
loss_type = "mse"  # "mse" or "dot"
gamma = 0.95  # Decay factor for the learning rate
# Learning
config = {
    "batch_size": tune.choice([64]),
    "hidden_size": tune.choice([32, 64, 128]),
    "number_hidden_layers": tune.choice([2]),
    "learning_rate": tune.loguniform(1e-4, 1e-3),
    "weight_decay": tune.loguniform(1e-2, 1),
    # "momentum": tune.uniform(0.0, 0.99),
}
default_config = {
    "batch_size": 64,
    "hidden_size": 32,
    "number_hidden_layers": 2,
    "learning_rate": 0.0003,
    "weight_decay": 0.035,
    "momentum": 0.9,
}
# Recursive Learning
number_recursions = 50

I have trained the model again with these parameters and it now works as expected. You can train it again with the new hyperparameters and check whether it also works for you.
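
For context, the two loss_type options compute roughly the following (a simplified sketch, not the exact code from the repository):

import torch.nn.functional as F

def direction_loss(pred, target, loss_type="mse"):
    # pred, target: predicted and ground-truth direction vectors,
    # e.g. of shape (batch, 2) on the horizontal plane.
    if loss_type == "mse":
        # Squared difference between the raw vectors.
        return F.mse_loss(pred, target)
    # "dot": penalize angular misalignment; the loss is 0 when the
    # normalized vectors point the same way and 2 when they are opposite.
    pred_n = F.normalize(pred, dim=-1)
    target_n = F.normalize(target, dim=-1)
    return (1.0 - (pred_n * target_n).sum(dim=-1)).mean()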

Thank you!

tejaswiniiitm commented 1 year ago

Hi @JLPM22, thank you for your continuous support and quick replies! I have tried training with the new settings. The generated model now behaves the same as the model you provided, so the problem described above is solved.

One thing I observed (even with the model you provided) is that when the hands stay in the same position and only the head is rotated sideways, the purple arrow showing the predicted body orientation also rotates sideways. I have NOT enabled 'Use HMD Forward'. The headset may carry some weight in the prediction, but the arrow rotates all the way to perpendicular. According to Section 4 of the paper, this shouldn't happen, right? Your comments on it?

JLPM22 commented 1 year ago

Hi @tejaswiniiitm

I think the problem is that the training data does not contain much motion with both hands together, as we focused more on other types of interactions. Thus, the network did not properly learn these situations.

You can improve the prediction by training the model on animation files that contain this type of movement (two hands together). I would add these specific animation files to the ones I already provide in TrainMSData and train again.

However, if you add your own data and you are not using Xsens, the skeleton will probably be different. In that case you may need to create the entire database with your capture system to avoid incompatibilities, so that all motions use the same skeleton structure and the bones are oriented the same way.

tejaswiniiitm commented 1 year ago

Okay, got it @JLPM22. I will train with more varied types of data and check!