athn-nik / teach

Official PyTorch implementation of the paper "TEACH: Temporal Action Compositions for 3D Humans"
https://teach.is.tue.mpg.de

The rendering code #33

Closed yangzhao1230 closed 1 year ago

yangzhao1230 commented 1 year ago

Hi, could you provide the rendering code, or a link where I can learn how to render?

athn-nik commented 1 year ago

The demo offers a way to render the meshes via the `visualize_meshes` function. However, you should refer to TEMOS for instructions on how to set up and use the Blender renderer; TEMOS did this first, and I took it from there. That said, I made a lot of changes to support multiple actions, different colors, etc., so while the code structure is similar, the code itself differs in many parts, and I cannot guarantee it will work out of the box. The code is already in `teach/render/blender`, the main script is `render.py`, and you should call it like this, for example: `blender --background --python render.py -- npy=/path/to/npys mode=video text_vid=false quality=true res=high`.
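For readability, the same invocation as a shell snippet (the npy path is a placeholder):

```bash
blender --background --python render.py -- \
    npy=/path/to/npys mode=video text_vid=false quality=true res=high
```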

You can follow this advice if you want to try it right away. Eventually, once some things are out of my way, I will sit down and write detailed documentation for this.

yangzhao1230 commented 1 year ago

I'm sorry for reopening this issue; I finally have time to try rendering. It seems there is no entry point in the `render.py` file you mentioned, only some pre-defined functions.

yangzhao1230 commented 1 year ago

I have set up the Blender renderer. I would greatly appreciate it if you could let me know whether the current code repository can directly render the figures in your paper!

athn-nik commented 1 year ago

Can you try my command above? The npy must contain a dictionary with 3 lists: one with the length of each motion, called `lengths`; one with the relevant texts; and one with the vertices of the motion. You can create this npy by slightly changing my demo, which saves an npy with rotations and those other lists; instead, ask it to save vertices by specifying `jointstype=vertices`. There is an entry point, and it is the `render.py` script. You also need Blender installed (3.1 should work).
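For concreteness, a minimal sketch of building such an npy. Only the `lengths` key name is confirmed in this thread; `text` and `vertices` are assumed names for the other two lists, so check the demo's saved npy for the real keys:

```python
import numpy as np

# Placeholder vertex arrays; in a real run these come from the TEACH demo
# with jointstype=vertices (SMPL meshes have 6890 vertices per frame).
verts_a = np.zeros((60, 6890, 3), dtype=np.float32)
verts_b = np.zeros((45, 6890, 3), dtype=np.float32)

# Dictionary with the 3 lists described above. "lengths" is confirmed;
# "text" and "vertices" are assumed key names.
payload = {
    "lengths": [len(verts_a), len(verts_b)],  # frames per action segment
    "text": ["walk forward", "sit down"],     # one description per segment
    "vertices": [verts_a, verts_b],           # per-segment mesh sequences
}
np.save("/path/to/npys/sample.npy", payload)  # load with allow_pickle=True
```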

yangzhao1230 commented 1 year ago

Thanks for your quick reply! I misunderstood earlier and used the `render.py` in `teach/render/blender`. Now I have successfully run the render code. However, the rendered results seem to have a problem: I selected two .npy files generated from the test set, and the rendered results collapse into a single mass (see the attached image).

yangzhao1230 commented 1 year ago

@athn-nik My command is `blender --background --python render.py -- npy=experiment/samples_slerp_aligned_pairs/checkpoint-last/val/163-0.npy mode=video text_vid=false quality=true res=high`, and the .npy file was generated by `python sample_seq.py folder=/path/to/experiment align=full slerp_ws=8`.

yangzhao1230 commented 1 year ago

@athn-nik I'm sorry, I was not using vertices as the output; the problem has been solved. One last question: I would like to know how to display different frames of a human motion in a single image, as shown in your paper. It seems easy to implement, but does your code provide this functionality? It would be helpful to visualize the motion this way when I cannot share videos with others. Thank you so much for your work and help!

athn-nik commented 1 year ago

Hello @yangzhao1230, yes, the code supports this if you set `mode=sequence` :) Sorry for the delayed responses; I am unfortunately very busy. The collapsing into a mass has to do with the scale of the skeletons (MMM vs. AMASS). Just use vertices and you are good to go; otherwise, SMPL-H joints.

yangzhao1230 commented 1 year ago

`mode=sequence` generates multiple PNG files. Is there a way to put multiple action sequences in the same figure, like the first two figures in your paper?

athn-nik commented 1 year ago

Try `separate_actions=false` and let me know.
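For example, combining the flags mentioned in this thread (this exact combination is a guess; adjust to your setup):

```bash
blender --background --python render.py -- \
    npy=/path/to/npys mode=sequence separate_actions=false \
    text_vid=false quality=true res=high
```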

lucasjinreal commented 1 year ago

@yangzhao1230 hello, my generations all have shape (9114, 21, 3) in the npy, which seems to be xyz coordinates rather than rotations. How do I get the rotations?

athn-nik commented 1 year ago

@yangzhao1230 let me know if you managed it. @lucasjinreal I have replied about the rotations in the other issue; let's move the discussion there. You can see how I run the demo, which gives you vertices: to get them, I do a forward pass through SMPL using the SMPL rotations and translation generated by the model.
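For anyone landing here, a minimal sketch of that step using the `smplx` package, not the repo's exact code; paths, tensor names, and shapes are illustrative assumptions:

```python
import torch
import smplx

# Assumed model outputs: per-frame axis-angle rotations and translation.
T = 60                         # number of frames
rots = torch.zeros(T, 22, 3)   # root orientation + 21 body joints
transl = torch.zeros(T, 3)     # global translation per frame

# SMPL-H forward pass turns rotations + translation into mesh vertices.
body_model = smplx.create("/path/to/smpl_models", model_type="smplh",
                          gender="neutral", batch_size=T)
out = body_model(
    global_orient=rots[:, 0],               # (T, 3) root orientation
    body_pose=rots[:, 1:].reshape(T, -1),   # (T, 63) body joint rotations
    transl=transl,
)
vertices = out.vertices                     # (T, 6890, 3), one mesh per frame
```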

lucasjinreal commented 1 year ago

@athn-nik can you share a link? Currently my main issue is that I cannot get rotations from the model; it outputs xyz coordinates.

athn-nik commented 1 year ago

Issue #39 that you opened.