DeepMotionEditing / deep-motion-editing

An end-to-end library for editing and rendering motion of 3D characters with deep learning [SIGGRAPH 2020]
BSD 2-Clause "Simplified" License

Video extract #131

Closed Hellodan-77 closed 3 years ago

Hellodan-77 commented 3 years ago

If we use our own video dataset and run OpenPose directly to extract the 2D joint positions into a JSON file, can we replace the original JSON file and perform style transfer on it? How many joints does the JSON file extracted from your video dataset contain?

HalfSummer11 commented 3 years ago

Thanks for your interest! Our code deals with JSON files generated by OpenPose with the --hand option. Here we convert the raw OpenPose output to a 2D skeleton corresponding to the CMU skeleton we used in training. Note that you may want to set reasonable mean and std poses and replace the default mean/std pose here.
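For reference, here is a minimal sketch of how the raw per-frame OpenPose JSON (produced with --hand) could be loaded before mapping it to the CMU-style skeleton. This is not the repo's actual conversion code; the function name and the choice of the first detected person are my own assumptions, only the JSON key names come from OpenPose's output format.

```python
import json
import numpy as np

def load_openpose_frame(json_path):
    """Load one per-frame OpenPose JSON file and return body + hand keypoints.

    OpenPose stores keypoints as flat [x0, y0, c0, x1, y1, c1, ...] lists;
    here they are reshaped to (num_joints, 3) arrays of (x, y, confidence).
    """
    with open(json_path) as f:
        data = json.load(f)
    if not data["people"]:
        return None  # no person detected in this frame
    person = data["people"][0]  # assumption: the first detected person is the subject
    body = np.array(person["pose_keypoints_2d"]).reshape(-1, 3)
    left_hand = np.array(person["hand_left_keypoints_2d"]).reshape(-1, 3)
    right_hand = np.array(person["hand_right_keypoints_2d"]).reshape(-1, 3)
    return body, left_hand, right_hand
```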

Hellodan-77 commented 3 years ago

Thank you very much for your reply! I would like to ask what you mean by mean and std. And how did you generate the NPZ file 'data/treadmill_norm/test2d.npz' used there?

HalfSummer11 commented 3 years ago

During training, our motion input is normalized ((X - mean(X)) / std(X)) before being fed into the network. Here mean(X) and std(X) are computed over the training dataset; ideally mean(X) would be an "average pose" without bias from any style. The same normalization step is also applied to the test inputs. Our test2d.npz is computed over all extracted 2D skeletons in this YouTube video using the code here.
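As an illustration of the normalization described above (not the repo's exact code), the statistics could be computed and stored roughly like this; the array shapes, key names in the .npz file, and the placeholder data are assumptions.

```python
import numpy as np

def compute_norm_stats(poses):
    """poses: (num_frames, num_joints * 2) array of 2D skeletons from the training set."""
    mean_pose = poses.mean(axis=0)
    std_pose = poses.std(axis=0)
    std_pose[std_pose < 1e-8] = 1.0      # guard against division by zero
    return mean_pose, std_pose

def normalize(poses, mean_pose, std_pose):
    """Apply the same (X - mean(X)) / std(X) normalization used for training inputs."""
    return (poses - mean_pose) / std_pose

# example: compute stats over training skeletons, save them, reuse at test time
train_poses = np.random.rand(1000, 42)   # placeholder data: 21 joints * 2 coords
mean_pose, std_pose = compute_norm_stats(train_poses)
np.savez("test2d.npz", mean=mean_pose, std=std_pose)   # key names are assumptions
```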

Hellodan-77 commented 3 years ago

Question 1: I want to experiment with my own video, but I am not very familiar with OpenPose. Since you already have a mature pipeline, could you please share the specific method (or relevant links) for extracting joint positions from video with OpenPose? Question 2: The result of your style transfer is a BVH file. Is there any visualization code, like the generated results on your project page? Could you please share it with me? Thank you very much!

HalfSummer11 commented 3 years ago

Sure. The way to get the JSON files is to simply run

./build/examples/openpose/openpose.bin --video examples/media/video.avi --hand --write_json output_json_folder/

as specified in the OpenPose repo. For BVH visualization, you can use Blender directly for a quick look. For rendering, please refer to the relevant section here in our repo.
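For a quick look at a BVH result inside Blender, something like the snippet below should work (run it from Blender's scripting tab or via blender --python, not a regular Python interpreter); the file path is a placeholder, not an actual file from this repo.

```python
# Run inside Blender, e.g. `blender --python view_bvh.py`.
import bpy

# Import the BVH animation produced by the style-transfer demo
# (the path below is a placeholder for one of the files in demo_results/).
bpy.ops.import_anim.bvh(filepath="demo_results/result.bvh")
```

After the import, the skeleton appears as an armature and the animation can be played back with the timeline controls.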

Hellodan-77 commented 3 years ago

Thank you very much for your reply! Does visualizing the BVH files generated in the demo_results folder require a GPU? Is it necessary to run the code under a Linux operating system?

HalfSummer11 commented 3 years ago

You're welcome :) A GPU is not required, but the visualization code is only tested under Linux and macOS. I'm not sure whether it works under Windows.

Hellodan-77 commented 3 years ago

Which version of OpenPose did you use to extract the 2D keypoints of the human body and generate the JSON files? Did you run it under Windows? Does it need a GPU? Which steps of https://github.com/CMU-Perceptual-Computing-Lab/openpose did you follow? Could you tell me more about that? Thank you very much!

HalfSummer11 commented 3 years ago

We used OpenPose 1.5.1, but I don't think the version matters here since the output format is the same, so the latest version should also work. We ran it on Ubuntu with a GPU. The only step is the command from my previous comment; there are no other steps.

./build/examples/openpose/openpose.bin --video examples/media/video.avi --hand --write_json output_json_folder/

If you go through OpenPose's README you should find a similar script here. For more details, you can consult their documentation.
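Note that --write_json produces one JSON file per video frame, so you will typically collect and sort these files afterwards before feeding them to the conversion step. A rough sketch under that assumption (the exact file-naming pattern may differ across OpenPose versions):

```python
import glob
import json
import os

# OpenPose writes one *_keypoints.json file per frame into the output folder.
json_files = sorted(glob.glob(os.path.join("output_json_folder", "*_keypoints.json")))

frames = []
for path in json_files:
    with open(path) as f:
        frames.append(json.load(f))

print(f"loaded {len(frames)} frames of 2D keypoints")
```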