evonneng / learning2listen

Official pytorch implementation for Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion (CVPR 2022)

How to render the output of this project to DECA #6

liangyishiki closed this issue 1 year ago

liangyishiki commented 1 year ago

@evonneng Hi! You indicate that the raw 3D meshes can be rendered using the DECA renderer. Could you tell me how to process your output results (the pkl files) so that they can be fed into DECA? It seems that DECA only takes images as input. Also, the pkl files contain only 3 parameters (exp, pose, prob); are these enough for DECA to generate output?

evonneng commented 1 year ago

Thank you for your question. Given those parameters, you should be able to modify this function: https://github.com/YadiraF/DECA/blob/master/decalib/deca.py#L160-L262 so that your predicted parameters are fed in as the codedict. Hope that helps!
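
A minimal sketch of that approach, assuming the standard DECA codedict layout (shape 100-d, tex 50-d, exp 50-d, pose 6-d, cam 3-d, light 9x3, detail 128-d) and DECA's demo-style constructor. The pkl file name, its key names (`exp`, `pose`), the fixed camera/lighting values, and the zero identity/texture codes below are all assumptions to illustrate the idea, not part of this repo:

```python
# Untested sketch: feed learning2listen predictions into DECA's decode() as the codedict.
import pickle
import torch
from decalib.deca import DECA
from decalib.utils.config import cfg as deca_cfg

device = 'cuda'
deca = DECA(config=deca_cfg, device=device)

# Hypothetical prediction file; adjust the path and key names to your output.
with open('predictions.pkl', 'rb') as f:
    pred = pickle.load(f)

exp = torch.as_tensor(pred['exp'], dtype=torch.float32, device=device)    # (T, 50) expression
pose = torch.as_tensor(pred['pose'], dtype=torch.float32, device=device)  # (T, 3) rotation

for t in range(exp.shape[0]):
    codedict = {
        'shape': torch.zeros(1, 100, device=device),   # neutral identity (not predicted)
        'tex':   torch.zeros(1, 50, device=device),    # neutral albedo (not predicted)
        'exp':   exp[t:t + 1],
        # DECA expects a 6-d pose (global rotation + jaw); place the 3-d prediction
        # in whichever half matches its meaning and zero out the rest.
        'pose':  torch.cat([pose[t:t + 1], torch.zeros(1, 3, device=device)], dim=1),
        'cam':   torch.tensor([[8.0, 0.0, 0.0]], device=device),   # arbitrary fixed camera (scale, tx, ty)
        'light': torch.zeros(1, 9, 3, device=device),               # flat spherical-harmonics lighting
        'detail': torch.zeros(1, 128, device=device),               # zero detail code if the detail branch runs
        'images': torch.zeros(1, 3, 224, 224, device=device),       # placeholder; only used for image overlays
    }
    opdict, visdict = deca.decode(codedict)   # rendered geometry / shape images for frame t
```

The `prob` entry in the pkl is the model's prediction probabilities and is not needed by DECA, so it can be ignored when building the codedict.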