evonneng / learning2listen

Official pytorch implementation for Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion (CVPR 2022)

Render the output of this project to DECA #12

Closed FortisCK closed 1 year ago

FortisCK commented 1 year ago

Hi @evonneng, the pkl files contain 3 keys (exp, pose, prob), but DECA's decode function expects the keys ['shape', 'tex', 'exp', 'pose', 'cam', 'light']. How should these three keys be mapped to the ones DECA needs?

nguyenntt97 commented 1 year ago

Not the author myself, but you could use demo_transfer.py in DECA for FLAME 3D rendering: just override id_codedict["exp"] and id_codedict["pose"] with the predicted exp and pose features, then call the decode function to render.

The other params can be inherited from a reference image (since the L2L dataset doesn't provide them); see the sketch below.
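In case a concrete starting point helps, here is a minimal sketch of that idea. It assumes the official DECA codebase is installed and that the pkl stores per-frame 'exp' and 'pose' arrays; REF_IMAGE, L2L_PKL, and frame_idx are placeholder names I made up, not part of either repo, and depending on how L2L stores pose (full 6-d FLAME pose vs. jaw-only) you may need to write it into only the matching slice of id_codedict['pose'].

```python
# Hedged sketch: render one L2L output frame with DECA's FLAME decoder.
import pickle
import torch

from decalib.deca import DECA
from decalib.utils.config import cfg as deca_cfg
from decalib.datasets import datasets

device = 'cuda'
deca = DECA(config=deca_cfg, device=device)

# 1. Encode a reference image to obtain shape/tex/cam/light (not stored by L2L),
#    e.g. a frame of the original listener video.
REF_IMAGE = 'path/to/reference_frame.png'  # placeholder path
testdata = datasets.TestData(REF_IMAGE, iscrop=True)
images = testdata[0]['image'].to(device)[None, ...]
with torch.no_grad():
    id_codedict = deca.encode(images)

# 2. Load the L2L prediction and override only the expression and pose codes.
L2L_PKL = 'path/to/l2l_output.pkl'  # placeholder path
with open(L2L_PKL, 'rb') as f:
    pred = pickle.load(f)

frame_idx = 0  # render a single frame; loop over frames for a full sequence
id_codedict['exp'] = torch.as_tensor(
    pred['exp'][frame_idx], dtype=torch.float32, device=device)[None, ...]
id_codedict['pose'] = torch.as_tensor(
    pred['pose'][frame_idx], dtype=torch.float32, device=device)[None, ...]

# 3. Decode with DECA; visdict holds the rendered geometry images,
#    which deca.visualize(visdict) can collate into a single grid.
with torch.no_grad():
    opdict, visdict = deca.decode(id_codedict)
```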

Another option is to find the original video and extract the missing params from a frame of that video.