mysee1989 / TCAE

Self-supervised Representation Learning from Videos for Facial Action Unit Detection

How to test my model? #7

Closed yy19931029 closed 4 years ago

yy19931029 commented 4 years ago

Hello, thanks for your help. I successfully trained my model and got a .pth file, but I don't know how to use it to extract the head (pose) embedding and the AU embedding. I read your paper: the AU and pose outputs are each a tensor of size 1 x 1 x 256. How do I convert them into AU and pose predictions?

mysee1989 commented 4 years ago

You can obtain the AU/pose embeddings from the encoder. After training, you only need the encoder. I'll add a description and a Python script for testing.
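A minimal sketch of what "only need the encoder" could look like in PyTorch. The class name, head names, and layer sizes here are stand-ins (assumptions, not the actual TCAE architecture); the point is that the trained encoder maps a face image to two separate embeddings, one for pose and one for AUs, each of dimension 256:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the TCAE encoder: the real architecture differs,
# but the trained encoder similarly produces a pose branch and an AU branch.
class Encoder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (N, 16, 1, 1)
        )
        self.pose_head = nn.Conv2d(16, dim, 1)  # pose embedding head
        self.au_head = nn.Conv2d(16, dim, 1)    # AU (emotion) embedding head

    def forward(self, x):
        h = self.backbone(x)
        # each head yields (N, 256, 1, 1); flatten to (N, 256)
        return self.pose_head(h).flatten(1), self.au_head(h).flatten(1)

encoder = Encoder()
# In practice you would restore the encoder weights from the trained .pth,
# e.g. encoder.load_state_dict(torch.load("model.pth"), strict=False)
encoder.eval()
with torch.no_grad():
    img = torch.randn(1, 3, 128, 128)         # one (dummy) face image
    pose_emb, au_emb = encoder(img)
print(pose_emb.shape, au_emb.shape)           # both (1, 256)
```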

yy19931029 commented 4 years ago

Thank you very much!

yy19931029 commented 4 years ago

Hello, can you tell me how to transform the AU feature of size 1 x 1 x 256 into AU predictions? Thank you~

mysee1989 commented 4 years ago

Given an input facial image, you can obtain the 'emotion_feature' from the trained TCAE model.

The original size of 'emotion_feature' is 1 x 1 x 256; just squeeze it to a 256-dimensional vector. Then train an AU classifier on top of these features.
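The squeeze-then-classify step could be sketched as below. This is an illustration under assumptions: the number of AUs (here 12) and the linear multi-label classifier are my choices, not prescribed by the repo, and the features and labels are random placeholders for the extracted 'emotion_feature' vectors and AU annotations:

```python
import torch
import torch.nn as nn

# Placeholder for N extracted 'emotion_feature' tensors of size (1, 1, 256):
# squeeze each to 256 and stack into an (N, 256) matrix.
N, num_aus = 32, 12                            # num_aus is an assumption
feats = torch.randn(N, 1, 1, 256).squeeze()    # -> (N, 256)
labels = (torch.rand(N, num_aus) > 0.5).float()  # dummy multi-label AU targets

# AU detection is multi-label, so use one sigmoid output per AU with BCE loss.
clf = nn.Linear(256, num_aus)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(clf.parameters(), lr=1e-3)

for _ in range(5):                             # a few illustrative steps
    optimizer.zero_grad()
    loss = criterion(clf(feats), labels)
    loss.backward()
    optimizer.step()

probs = torch.sigmoid(clf(feats))              # per-AU activation probability
print(feats.shape, probs.shape)                # (32, 256) (32, 12)
```

A single linear layer is the usual linear-probe baseline for frozen self-supervised features; a small MLP would be a natural variant.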