Open whitealex95 opened 3 years ago
Hi, the mean and variance are necessary because we use per-character normalization to stabilize training. While we haven't done a thorough analysis of the effects of normalization, we know that networks are likely to cope better with normalized data.
One possible solution (besides the retraining you mentioned) may be to use the mean and variance of the most similar character that has motion data.
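To make the suggestion concrete, here is a minimal sketch of what per-character, per-channel normalization typically looks like, assuming motions are stored as a `(num_frames, num_channels)` array of root offsets and quaternion components; the file layout and function names are hypothetical, not necessarily this repo's API:

```python
import numpy as np

def compute_mean_var(motion, eps=1e-8):
    """Per-channel statistics over all frames of one character's motions."""
    mean = motion.mean(axis=0)
    var = motion.var(axis=0)
    var[var < eps] = eps  # guard constant channels against division by zero
    return mean, var

def normalize(motion, mean, var):
    """Map each channel to roughly zero mean and unit variance."""
    return (motion - mean) / np.sqrt(var)

def denormalize(motion, mean, var):
    """Invert normalize(); applied to the network's output."""
    return motion * np.sqrt(var) + mean
```

For a newly rigged character with no motion data, the workaround above amounts to loading the saved `mean`/`var` arrays of the most similar character from the mean_var folder and passing them to `normalize`/`denormalize` unchanged.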
Hi, I have some more questions about the normalization. I am fine with normalizing the distances. However, what does it mean to normalize the quaternions? Does a normalized quaternion have any physical meaning? As far as I know, addition and scalar division of quaternions do not correspond to composing rotations in physical space.
Hi, the normalization doesn't have any explicit physical meaning. We use it because it's a very common trick in the deep learning community: having the network consume and produce data with zero mean and unit variance helps stabilize training and yields better performance.
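Since channel-wise statistics treat the four quaternion components as independent scalars, the denormalized output is generally not a unit quaternion anymore. A common fix, sketched below as an assumption rather than a statement about this repo's code, is to project each predicted quaternion back onto the unit sphere after denormalization:

```python
import numpy as np

def renormalize_quaternions(q, eps=1e-8):
    """Project quaternions back to unit length.

    q: array of shape (..., 4), e.g. (num_frames, num_joints, 4),
    as produced by denormalizing the network output channel-wise.
    """
    norm = np.linalg.norm(q, axis=-1, keepdims=True)
    return q / np.clip(norm, eps, None)  # eps guards near-zero vectors
```

This restores a valid rotation while keeping the statistical benefits of normalization during training.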
Hi, it seems that to retarget to a new model (test.bvh), we need the mean and variance of the root offsets and quaternions saved in the mean_var folder.
However, since the new model (test.bvh) is newly rigged, there is no motion data to compute the mean and variance from. What should we do in this case?
Is training the model without normalization (setting `--normalization=0`) the only solution? When I tried training without normalization, the results seemed much worse than with normalization. Do you have any idea about this?