DeepMotionEditing / deep-motion-editing

An end-to-end library for editing and rendering motion of 3D characters with deep learning [SIGGRAPH 2020]
BSD 2-Clause "Simplified" License

Style transfer seems to reduce the sampling rate #164

Open rookboom opened 3 years ago

rookboom commented 3 years ago

Applying the style transfer examples seems to reduce the sampling rate. For example, trying to apply a 'childlike' walking style to a neutral running animation to get a childlike running animation as a result:

python style_transfer/test.py --style_src style_transfer/data/xia_test/childlike_01_000.bvh --content_src style_transfer/data/xia_test/neutral_13_000.bvh --output_dir style_transfer/demo_results/childlike_running

The style animation has about 230 frames. The content animation has 118 frames. The resulting animation has only 28 frames but covers about the same distance as the input animations. The frame time does seem to be adjusted, so it should play at a similar speed to the originals. The sampling rate is just much lower. Why such a sharp reduction in the number of frames? Is there a way to control the number of frames in the output animation?

The same happens when trying to apply a 'depressed' style to a 'neutral running' content animation. The animation speeds up because there are fewer frames in the same time.

LeVan146 commented 3 years ago

Yeah, that's because the motion is downsampled by a factor of 4 (118/4 ≈ 28) here: https://github.com/DeepMotionEditing/deep-motion-editing/blob/3c1fe78b17ac78fee9c3048d22eafd5b02b940a4/utils/animation_data.py#L388

So I think you can check where this function is called and set the downsample parameter there instead.
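If changing the downsample parameter isn't practical, another rough workaround is to upsample the output motion back to the original frame rate by interpolation. Here is a minimal sketch; `upsample_motion` is a hypothetical helper, not part of the repo, and linearly interpolating joint channels (especially Euler angles) is only an approximation. The BVH frame time would also need to be divided by the same factor.

```python
import numpy as np

def upsample_motion(frames, factor):
    """Linearly interpolate a (num_frames, num_channels) motion array
    by an integer factor. Hypothetical helper, not from the repo;
    linear interpolation of rotation channels is only approximate."""
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    old_t = np.arange(n)
    # New time samples: `factor` steps between each original pair of frames.
    new_t = np.linspace(0, n - 1, (n - 1) * factor + 1)
    return np.stack(
        [np.interp(new_t, old_t, frames[:, j]) for j in range(frames.shape[1])],
        axis=1,
    )

# A 28-frame clip upsampled by 4 yields (28 - 1) * 4 + 1 = 109 frames.
clip = np.random.rand(28, 3)
restored = upsample_motion(clip, 4)
print(restored.shape)  # (109, 3)
```

The original frames are preserved exactly at every 4th index of the result; only the in-between frames are synthesized. Remember that the corresponding frame time in the output BVH must shrink by the same factor, or the clip will play 4x too slowly.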