eth-siplab / AvatarPoser

Official Code for ECCV 2022 paper "AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing"
https://siplab.org/projects/AvatarPoser
MIT License

1 million epochs #8

Closed: wdanc closed this issue 1 year ago

wdanc commented 2 years ago

Thanks for your great work! The code makes it very easy to reproduce the results!

You set 1 million training epochs. Does the network really need that many epochs to converge? Could you tell me the value of the loss when you stopped training? My total loss doesn't seem to go down after it reaches about 2.5e-2, but I'm not sure if I should run through all 1 million epochs.

I would greatly appreciate your reply.

jiaxi-jiang commented 2 years ago

Hi, thanks for your interest in our work!

The 1M is just an arbitrarily large number to keep the training running; of course you do not need to wait that long :) As I remember, 3K epochs should be enough for the training. You can just stop when the loss doesn't go down anymore.
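
To make "stop when the loss doesn't go down" concrete, here is a minimal early-stopping sketch. It is purely illustrative: `train_one_epoch` is a hypothetical callable returning the epoch's average loss, not a function from this repo, and `patience` is an assumed hyperparameter.

```python
def train_with_early_stopping(train_one_epoch, patience=100, max_epochs=1_000_000):
    """Keep the 1M-epoch cap but stop once the loss plateaus."""
    best_loss = float("inf")
    stale = 0  # epochs since the last improvement
    for epoch in range(max_epochs):
        loss = train_one_epoch()
        if loss < best_loss - 1e-6:  # small tolerance against noise
            best_loss, stale = loss, 0
        else:
            stale += 1
        if stale >= patience:
            print(f"stopping at epoch {epoch}, best loss {best_loss:.4e}")
            break
    return best_loss
```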

wdanc commented 2 years ago

> Hi, thanks for your interest in our work!
>
> The 1M is just an arbitrarily large number to keep the training running; of course you do not need to wait that long :) As I remember, 3K epochs should be enough for the training. You can just stop when the loss doesn't go down anymore.

Thanks for your reply!

In my experiments, the loss converged at around 20K epochs. Also, I tested the model you provided (avatarposer.pth) with main_test_avatarposer.py on the test data split produced by data_split, which includes BioMotion, CMU, and HDM05. I got an average rotational error of 1.98 (degrees), an average positional error of 2.51 (cm), and an average velocity error of 24.11 (cm/s). These numbers are inconsistent with Table 1 in your paper: they are better. Did you provide an improved model? I'm not sure whether I'm getting something wrong.
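
For anyone comparing numbers, here is a minimal sketch of how the positional and velocity errors above are typically computed. The function name, tensor shapes, and frame rate are my assumptions for illustration, not code copied from main_test_avatarposer.py.

```python
import torch

def position_and_velocity_error(pred_pos, gt_pos, fps=60):
    """pred_pos, gt_pos: (frames, joints, 3) joint positions in meters."""
    # MPJPE: mean per-joint Euclidean distance, reported in centimeters
    mpjpe = (pred_pos - gt_pos).norm(dim=-1).mean() * 100
    # MPJVE: the same error on finite-difference velocities, in cm/s
    pred_vel = (pred_pos[1:] - pred_pos[:-1]) * fps
    gt_vel = (gt_pos[1:] - gt_pos[:-1]) * fps
    mpjve = (pred_vel - gt_vel).norm(dim=-1).mean() * 100
    return mpjpe.item(), mpjve.item()
```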

Also, the weight of global_orientation_loss in the code is 0.02, while the paper says it is 0.05. Does this make a difference? https://github.com/eth-siplab/AvatarPoser/blob/71abcc6e60b599c3dc0148df26c4c8e0a951e937/models/model_avatarposer.py#L173
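
For context, a minimal sketch of the weighting being discussed; only the 0.02 factor comes from the linked line, and the argument names here are illustrative placeholders rather than the repo's variables.

```python
def weighted_total_loss(loss_local_rotations, loss_global_orientation, w_global=0.02):
    # w_global is 0.02 in the released code (linked line above) but 0.05 in the paper
    return loss_local_rotations + w_global * loss_global_orientation
```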

jiaxi-jiang commented 2 years ago

Hi, the pretrained model is improved, but it should only give a slightly better result than the paper, with MPJPE around 3.6. What results do you get with the model you trained yourself? Since the numbers you report with the pretrained model are similar to what I got from someone else, I am not sure if the AMASS dataset has changed since the submission of our paper. I will double-check that. Thanks for telling me.

wdanc commented 2 years ago

> Hi, the pretrained model is improved, but it should only give a slightly better result than the paper, with MPJPE around 3.6. What results do you get with the model you trained yourself? Since the numbers you report with the pretrained model are similar to what I got from someone else, I am not sure if the AMASS dataset has changed since the submission of our paper. I will double-check that. Thanks for telling me.

The model I trained gets 2.2, and I downloaded the AMASS dataset two weeks ago. Could you please let me know the results of your double check?

dulucas commented 1 year ago

Any updates on this issue? Same here: I got 1.98/2.51/24.11 using the provided model. @wdanc could you please share the accuracy you measured with the model you trained yourself? Thanks!

wdanc commented 1 year ago

> Any updates on this issue? Same here: I got 1.98/2.51/24.11 using the provided model. @wdanc could you please share the accuracy you measured with the model you trained yourself? Thanks!

The model I trained myself with 3 inputs got 2.91/3.92/26.50.

jiaxi-jiang commented 1 year ago

Hi, it seems that I uploaded the wrong pretrained model, which was trained on a different data split, so please just retrain the model from scratch.