Walter0807 / MotionBERT

[ICCV 2023] PyTorch Implementation of "MotionBERT: A Unified Perspective on Learning Human Motion Representations"
Apache License 2.0

Something about train. #42

Closed Ared521 closed 1 year ago

Ared521 commented 1 year ago

Thank you for your great work. I have a question for you: I see that there are three training sections in the docs folder, namely pretrain, scratch, and finetune. Is there any connection between the three? If I only care about lifting 2D keypoints to 3D keypoints, which one should I focus on? Thank you very much; I look forward to your answer.

Walter0807 commented 1 year ago

Hi, thanks for your interest. It depends on your needs. Finetuning usually takes less time than training from scratch.
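
For readers wondering how the three relate: pretraining produces a backbone checkpoint, "scratch" trains the backbone from random initialization on the target task, and finetuning loads the pretrained checkpoint into the backbone before training on the target task (here, 2D-to-3D lifting). Below is a minimal, self-contained PyTorch sketch of that distinction; the placeholder model, file name, and the 'model_pos' key are illustrative assumptions, not the repo's exact code.

    import torch
    import torch.nn as nn

    # Tiny placeholder standing in for the backbone (DSTformer in the repo);
    # the layer sizes here are purely illustrative.
    def build_backbone():
        return nn.Sequential(nn.Linear(17 * 2, 256), nn.ReLU(), nn.Linear(256, 17 * 3))

    # Scratch: train a randomly initialized backbone directly on the target task.
    scratch_model = build_backbone()

    # Pretrain: suppose pretraining produced a checkpoint like this
    # (the 'model_pos' key and file name are assumptions for this sketch).
    torch.save({'model_pos': build_backbone().state_dict()}, 'pretrained_demo.bin')

    # Finetune: start from the pretrained weights instead of random init, then train.
    finetune_model = build_backbone()
    ckpt = torch.load('pretrained_demo.bin', map_location='cpu')
    finetune_model.load_state_dict(ckpt['model_pos'], strict=True)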

Ared521 commented 1 year ago

> Hi, thanks for your interest. It depends on your needs. Finetuning usually takes less time than training from scratch.

Hi, thanks for your reply. I am now studying train.py, and I would like to ask about the following lines of code in the function train_epoch:

    if args.rootrel:
        batch_gt = batch_gt - batch_gt[:, :, 0:1, :]
    else:
        batch_gt[:, :, :, 2] = batch_gt[:, :, :, 2] - batch_gt[:, 0:1, 0:1, 2]  # Place the depth of the first-frame root at 0.

Here the root joint is set to (0, 0, 0) and the other joints are expressed by their positions relative to the root joint, so why is the same not done for batch_input? Shouldn't the x, y fed into the DSTformer network be in the same coordinates as the x, y in gt? I hope to get your advice, thank you very much!
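
For reference, here is a small, self-contained sketch (not taken from the repo) of what the two branches above do to a dummy ground-truth tensor; the dimension names N (batch), T (frames), J (joints), C (x, y, z) and treating joint 0 as the root are assumptions based on the snippet.

    import torch

    N, T, J, C = 2, 4, 17, 3
    batch_gt = torch.randn(N, T, J, C)

    # rootrel branch: subtract the root joint per frame, so every pose is expressed
    # relative to its own root and the root itself becomes (0, 0, 0).
    rootrel_gt = batch_gt - batch_gt[:, :, 0:1, :]
    print(rootrel_gt[:, :, 0, :].abs().max())  # ~0: the root sits at the origin in every frame

    # else branch: only shift the depth (z) channel by the first frame's root depth,
    # so x, y keep their original coordinates and only the depth is anchored at 0.
    shifted_gt = batch_gt.clone()
    shifted_gt[:, :, :, 2] = shifted_gt[:, :, :, 2] - shifted_gt[:, 0:1, 0:1, 2]
    print(shifted_gt[:, 0, 0, 2])  # ~0: depth of the first frame's root is 0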