Closed yy19931029 closed 4 years ago
You can obtain the AU/pose embedding from the encoder. After training, you only need the encoder. I'll add a description and a Python script for testing.
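A minimal sketch of the "encoder-only at test time" idea. This is not the repo's actual code: the real TCAE encoder architecture and checkpoint keys will differ, so the `ToyEncoder` class and the `"encoder"` state-dict key below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in for the TCAE encoder: face image -> 1 x 1 x 256 embedding."""
    def __init__(self, emb_dim=256):
        super().__init__()
        self.conv = nn.Conv2d(3, emb_dim, kernel_size=4)
        self.pool = nn.AdaptiveAvgPool2d(1)   # -> (N, 256, 1, 1)

    def forward(self, x):
        return self.pool(self.conv(x))

encoder = ToyEncoder()
# With a real checkpoint you would load only the encoder weights, e.g.:
# state = torch.load("tcae.pth", map_location="cpu")
# encoder.load_state_dict(state["encoder"])  # key name is an assumption
encoder.eval()

with torch.no_grad():
    image = torch.randn(1, 3, 128, 128)       # one input face image
    emotion_feature = encoder(image)          # shape (1, 256, 1, 1)
    embedding = emotion_feature.squeeze()     # shape (256,)

print(tuple(embedding.shape))
```

The decoder is only needed during self-supervised training; at test time the squeezed 256-d vector is the feature you feed to a downstream classifier.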
Thank you very much!
Hello, can you tell me how to transform the AU embedding of size 1 x 1 x 256 into AU predictions? Thank you~
Given an input facial image, you can obtain the 'emotion_feature' from the trained TCAE model.
The original size of 'emotion_feature' is 1 x 1 x 256; just squeeze it to a 256-dimensional vector, then train the AU classifier on it.
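The squeeze-then-classify step above can be sketched in NumPy. This is not the authors' code: the dataset, the 12-AU label count, and the plain logistic classifier are illustrative assumptions; in practice you would train on real embeddings and labels (e.g. BP4D AU annotations).

```python
import numpy as np

rng = np.random.default_rng(0)

def squeeze_feature(emotion_feature):
    """Squeeze a (1, 1, 256) feature to a (256,) vector."""
    return np.squeeze(emotion_feature)

# Toy dataset: 100 embeddings with 12 binary AU labels (synthetic, for illustration).
n_samples, emb_dim, n_aus = 100, 256, 12
X = np.stack([squeeze_feature(rng.normal(size=(1, 1, emb_dim)))
              for _ in range(n_samples)])                     # (100, 256)
Y = (rng.random(size=(n_samples, n_aus)) > 0.5).astype(float)

# One-layer multi-label logistic classifier, trained with plain gradient descent.
W = np.zeros((emb_dim, n_aus))
b = np.zeros(n_aus)
lr = 0.01
for _ in range(200):
    logits = X @ W + b
    probs = 1.0 / (1.0 + np.exp(-logits))   # per-AU sigmoid
    grad = probs - Y                        # gradient of binary cross-entropy wrt logits
    W -= lr * (X.T @ grad) / n_samples
    b -= lr * grad.mean(axis=0)

# Predict AU activations for one new embedding (threshold at 0.5).
emb = squeeze_feature(rng.normal(size=(1, 1, emb_dim)))
au_probs = 1.0 / (1.0 + np.exp(-(emb @ W + b)))
au_pred = (au_probs > 0.5).astype(int)      # one 0/1 flag per AU
print(au_pred.shape)
```

The same recipe applies to the pose embedding: squeeze to 256 dimensions, then fit a head-pose regressor or classifier on top.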
Hello, thanks for your help. I successfully trained my model and have a .pth file, but I don't know how to use it to get the head-pose embedding and the AU embedding. I read your paper; the AU and pose outputs are each of size 1 x 1 x 256. How do I convert them into AU and pose predictions?