xbpeng / DeepMimic

Motion imitation with deep reinforcement learning.
https://xbpeng.github.io/projects/DeepMimic/index.html
MIT License

Skills From Videos #53

zdlarry opened this issue 5 years ago

zdlarry commented 5 years ago

I find it does not work well when I use the theta from vision-based pose estimators to build mocap data. Is it necessary to perform additional operations on theta besides the ones mentioned in the SFV paper?

highway007 commented 5 years ago

@Dz97313 How did you make the mocap data? I can get mocap data with HMR as in SFV, but I can only get the 3D positions, not the quaternions. Were you able to get the quaternions?

xbpeng commented 5 years ago

Which video are you trying to imitate? What does the reconstructed reference motion look like? If the pose estimator is not able to generate a good reference motion, then imitation learning will likely not work well.

zdlarry commented 5 years ago

@highway007 I get the quaternions from the theta contained in HMR's output. How do you get the 3D positions from HMR? Are your 3D positions transformed into world coordinates?

zdlarry commented 5 years ago

@xbpeng I am imitating the cartwheel video. But I did not re-train the HMR model with your image-augmentation method; I simply rotated the upside-down person to an upright one before predicting the rotation parameters, since I thought the rotation parameters should be the same. But the HMR model does not give good predictions here, and the OpenPose model misses some joint information at times. Can you help me out of this trouble? Thanks!

highway007 commented 5 years ago

Hi @Dz97313, from this line in the HMR demo: `joints, verts, cams, joints3d, theta = model.predict(input_img, get_theta=True)` I got `joints3d`, and its positions are local (not world). I also see: `pose = theta[:, 3:75]  # This is the 1 x 72 pose vector of SMPL, i.e. the rotations of 24 joints in axis-angle format`. Did you transform the rotations of the 24 joints to quaternions?
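For reference, converting the 72-D SMPL pose to per-joint quaternions takes only a few lines of NumPy/SciPy. A minimal sketch, assuming SciPy is available; `smpl_pose_to_quats` is an illustrative helper name, and note that SciPy returns quaternions as (x, y, z, w) while DeepMimic's mocap files store (w, x, y, z):

```python
# Minimal sketch: SMPL axis-angle pose (72 values) -> 24 joint quaternions.
import numpy as np
from scipy.spatial.transform import Rotation as R

def smpl_pose_to_quats(pose):
    """pose: (72,) axis-angle vector, 3 values for each of the 24 joints."""
    aa = np.asarray(pose).reshape(24, 3)      # per-joint axis-angle
    quats_xyzw = R.from_rotvec(aa).as_quat()  # SciPy convention: (x, y, z, w)
    return quats_xyzw[:, [3, 0, 1, 2]]        # reorder to (w, x, y, z)
```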

zdlarry commented 5 years ago

@highway007 I did transform the pose to quaternions directly, but the result is not good. Sometimes the axis of rotation is disordered.

highway007 commented 5 years ago

@Dz97313 Hi, can you tell me how you transformed it to quaternions? (Also, some joints like the knee only have 1 DoF; how do you transform those?) I also wonder how to get the root world position. Thx ;)
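One possible way to handle the 1-DoF joints, offered as a sketch rather than a verified recipe: for a revolute joint like the knee, project the axis-angle rotation onto the joint's known rotation axis to get a scalar angle. The joint axis below is a placeholder that depends on the skeleton definition, and the projection discards any off-axis rotation:

```python
# Hedged sketch: collapse an axis-angle rotation to a scalar angle for a
# 1-DoF joint by projecting onto the joint's rotation axis.
import numpy as np

def revolute_angle(axis_angle, joint_axis):
    """axis_angle: (3,) rotation vector; joint_axis: (3,) unit axis."""
    angle = np.linalg.norm(axis_angle)   # rotation magnitude in radians
    if angle < 1e-8:
        return 0.0
    axis = axis_angle / angle            # unit rotation axis
    # Signed projection onto the joint axis; off-axis components are
    # discarded, which is lossy but keeps the joint to a single DoF.
    return float(angle * np.dot(axis, joint_axis))

# Example (assumed axis -- verify against your skeleton):
# knee_angle = revolute_angle(pose_aa[4], np.array([1.0, 0.0, 0.0]))
```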

xbpeng commented 5 years ago

Yes, the coordinates for HMR are different from the ones in DeepMimic. When retargeting the motion to the character, you should visualize it with args/kin_char_args.txt to make sure things are retargeted properly. Otherwise the policy doesn't really have a chance of learning the right motion if the reference motion is wrong.
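To make the coordinate difference concrete, here is a rough sketch of a fixed frame correction applied to the root pose. HMR predicts in the camera frame, so the 180-degree rotation below is only an assumption; the right correction depends on your setup and should always be confirmed by visualizing the result:

```python
# Hedged sketch: rotate HMR's camera-frame root pose toward DeepMimic's
# y-up world frame with a fixed correction. The correction rotation is an
# assumption; verify by visualizing the retargeted motion.
import numpy as np
from scipy.spatial.transform import Rotation as R

FRAME_FIX = R.from_euler('x', 180, degrees=True)  # placeholder correction

def root_to_world(root_pos, root_quat_wxyz):
    pos = FRAME_FIX.apply(root_pos)  # rotate the root position
    rot = FRAME_FIX * R.from_quat(np.asarray(root_quat_wxyz)[[1, 2, 3, 0]])
    q = rot.as_quat()                # SciPy order: (x, y, z, w)
    return pos, q[[3, 0, 1, 2]]      # back to (w, x, y, z)
```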

Zju-George commented 5 years ago

> Yes, the coordinates for HMR are different from the ones in DeepMimic. When retargeting the motion to the character, you should visualize it with args/kin_char_args.txt to make sure things are retargeted properly. Otherwise the policy doesn't really have a chance of learning the right motion if the reference motion is wrong.

Yeah, but the question is how to retarget. Apart from the bind poses (T-poses) being different, the joint counts differ too... How is it possible to retarget? What code do you use? Could you please share it if you have any?
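One common approach, sketched here rather than taken from the thread: build an explicit joint map from SMPL's 24 joints to the DeepMimic humanoid's joints, drop the SMPL joints the humanoid lacks, and compose rotations where several SMPL joints collapse into one. The indices below are illustrative placeholders, not a verified correspondence:

```python
# Hedged sketch: retarget per-joint quaternions via an explicit joint map.
# Check the SMPL indices against both skeleton definitions before use.
SMPL_TO_DEEPMIMIC = {
    0: "root",
    1: "left_hip",
    2: "right_hip",
    4: "left_knee",
    5: "right_knee",
    # ... remaining joints omitted; merged joints (e.g. the spine chain)
    # need their rotations composed rather than copied.
}

def retarget(smpl_quats):
    """smpl_quats: (24, 4) quaternions in (w, x, y, z) order."""
    return {name: smpl_quats[i] for i, name in SMPL_TO_DEEPMIMIC.items()}
```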

Zju-George commented 5 years ago

> @highway007 I did transform the pose to quaternions directly, but the result is not good. Sometimes the axis of rotation is disordered.

How's your work going? Did you find a good solution for retargeting? Please share, I am in the same boat.

yjc765 commented 4 years ago

@xbpeng There is no file called kin_char_args.txt under the args folder, is there?

xbpeng commented 4 years ago

Sorry about that, kin_char_args.txt has been renamed to play_motion_humanoid3d_args.txt. I will fix the readme.
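For later readers: with the renamed file, playing back a reference motion uses the standard DeepMimic invocation below; to view your own reconstructed clip, point the `--motion_file` entry inside that arg file at it.

```
python DeepMimic.py --arg_file args/play_motion_humanoid3d_args.txt
```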