DeepMotionEditing / deep-motion-editing

An end-to-end library for editing and rendering motion of 3D characters with deep learning [SIGGRAPH 2020]
BSD 2-Clause "Simplified" License

Can't we do style transfer on two people with different skeletons? #103

Open sunbin1357 opened 3 years ago

sunbin1357 commented 3 years ago

For Motion Style Transfer, I found that the content_src and style_src in the test demo are from the same person.
For example,

python style_transfer/test.py --content_src style_transfer/data/xia_test/sexy_01_000.bvh --style_src style_transfer/data/xia_test/depressed_18_000.bvh --output_dir style_transfer/demo_results/comp_3d_2

Can't we do style transfer on two people with different skeletons?

kfiraberman commented 3 years ago

You are right. The two applications we implemented here (style transfer and motion retargeting) are currently independent. Motion style transfer requires that the skeletons of the source and target animations be similar. Combining the two applications into one system that can transfer style between different skeleton structures is a great direction for future work, but we do not plan to release such a version in the near future.
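To make the constraint concrete, here is a minimal sketch (not part of this library; the helper is hypothetical) that checks whether two BVH files declare the same joint hierarchy. The file paths are just the ones from the demo command above:

```python
def bvh_joint_names(path):
    """Collect joint names, in declaration order, from a BVH hierarchy section."""
    names = []
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if tokens and tokens[0] in ("ROOT", "JOINT"):
                names.append(tokens[1])
    return names

content = bvh_joint_names("style_transfer/data/xia_test/sexy_01_000.bvh")
style = bvh_joint_names("style_transfer/data/xia_test/depressed_18_000.bvh")

# Style transfer here assumes a shared skeleton, so the joint lists should
# match exactly; relaxing this constraint is what retargeting is for.
assert content == style, "skeletons differ; style transfer alone won't handle this"
```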

sunbin1357 commented 3 years ago

Is it possible to combine these two applications? What is the difficulty in combining them? Are there any papers that combine the two? I would appreciate any references you can provide. Thank you very much.

sunbin1357 commented 3 years ago

Is your paper "Learning character-agnostic motion for motion retargeting in 2D" a combination of style transfer and motion retargeting?

kfiraberman commented 3 years ago

"Learning character-agnostic motion for motion retargeting in 2D" is retargeting in 2D (only the first application). Ideally, you would have a system that decomposes animation into 3 parts: motion, skeleton, and style. I'm not aware of works that tackle these two problems within one system. I think that the main challenge here is to collect labeled data that contains different skeletons that perform motions in diverse styles. Working with two different datasets (one for retargeting and one for style) in one framework may be possible, but it's not trivial at all.

sunbin1357 commented 3 years ago

I'm new to the fields of motion retargeting and style transfer.

For your first application, motion retargeting, I think of the task as retargeting from 3D to 3D, covering both intra- and inter-structure retargeting. For "Learning character-agnostic motion for motion retargeting in 2D", I think of the task as retargeting from 2D to 2D. For Motion Style Transfer, can I think of the task as retargeting from 2D to 3D, i.e., intra-structure motion retargeting?

Furthermore, I want to do a study that implements motion retargeting from 2D to 3D, i.e., inter-structure motion retargeting. In other words, I want to implement motion imitation, where a 3D skeleton performs a similar motion by referring to a 2D video. Is there any related work available? Thank you very much!

kfiraberman commented 3 years ago

I'm not aware of works that can directly retarget 2D motion from video to a given character. Section 5.2 of this paper suggests such an application.

Generally speaking, with existing methods you could reconstruct 2D to 3D (paper), then retarget 3D to 3D.
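As a rough sketch of that two-stage route, the pipeline below uses stub functions standing in for existing methods (a 2D pose estimator, a 2D-to-3D lifting network, and a 3D-to-3D retargeting model such as this repo's); all names and shapes are illustrative:

```python
import numpy as np

def estimate_2d_poses(frames):
    """Stand-in for an off-the-shelf 2D pose estimator (keypoints per frame)."""
    return np.zeros((len(frames), 17, 2))

def lift_2d_to_3d(poses_2d):
    """Stand-in for a 2D-to-3D lifting network."""
    n_frames, n_joints, _ = poses_2d.shape
    return np.zeros((n_frames, n_joints, 3))

def retarget_3d(motion_3d, target_skeleton):
    """Stand-in for learned 3D-to-3D retargeting; a real model adapts to target_skeleton."""
    return motion_3d

frames = [None] * 120                   # placeholder for decoded video frames
motion = retarget_3d(lift_2d_to_3d(estimate_2d_poses(frames)), target_skeleton="target.bvh")
print(motion.shape)                     # (120, 17, 3)
```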

sunbin1357 commented 3 years ago

> I'm not aware of works that can directly retarget 2D motion from video to a given character. Section 5.2 of this paper suggests such an application.
>
> Generally speaking, with existing methods you could reconstruct 2D to 3D (paper), then retarget 3D to 3D.

The simplest method to retarget 3D to 3D may be inverse kinematics (IK). What are the pros and cons of IK versus your proposed motion retargeting?
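For context, here is a minimal sketch of what I mean by IK-based retargeting: copy an end-effector position from the source motion and solve the target chain's joint angles, here with cyclic coordinate descent (CCD) on a planar two-bone chain in plain numpy. The bone lengths and target point are made up:

```python
import numpy as np

def fk(angles, lengths):
    """Forward kinematics: joint positions of a planar chain rooted at the origin."""
    pts = [np.zeros(2)]
    pos, theta = np.zeros(2), 0.0
    for a, l in zip(angles, lengths):
        theta += a
        pos = pos + l * np.array([np.cos(theta), np.sin(theta)])
        pts.append(pos)
    return np.array(pts)

def ccd_ik(angles, lengths, target, iters=50):
    """CCD: repeatedly rotate each joint to swing the end effector toward the target."""
    angles = np.asarray(angles, dtype=float).copy()
    for _ in range(iters):
        for i in reversed(range(len(angles))):
            pts = fk(angles, lengths)
            to_end = pts[-1] - pts[i]
            to_tgt = target - pts[i]
            angles[i] += np.arctan2(to_tgt[1], to_tgt[0]) - np.arctan2(to_end[1], to_end[0])
    return angles

lengths = [1.0, 1.0]                    # target character's bone lengths
target = np.array([1.2, 0.8])           # end-effector position copied from the source
angles = ccd_ik([0.0, 0.0], lengths, target)
print(fk(angles, lengths)[-1])          # close to target: IK matches positions,
                                        # but has no notion of style or coordination
```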