Are you performing image alignment prior to calculating the optical flow? I found some code in the repository related to alignment and did not quite understand what technique you used. I am trying to see how your models perform on uncontrolled-environment data I have, so I would like to know whether I should align the faces before running the models.
Why are you calculating the mean face?
How are you performing the viewpoint classification?
We do not perform any kind of facial alignment whatsoever. The code you are referring to is (as far as I remember) related to an ablation study. For the optical flow we use the original BP4D images without face cropping, and then we crop the flow field with respect to the RGB bounding box.
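A minimal sketch of that order of operations, assuming Farneback flow from OpenCV and a `(x, y, w, h)` bounding box coming from whatever face detector you run on the RGB frame (both are assumptions, not necessarily what the repository uses):

```python
import cv2
import numpy as np

def flow_then_crop(prev_gray, next_gray, rgb_bbox):
    """Compute dense optical flow on the FULL, uncropped frames,
    then crop the flow field with the face bounding box found on
    the corresponding RGB frame."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    x, y, w, h = rgb_bbox
    return flow[y:y + h, x:x + w]  # H x W x 2 crop, no alignment step
```

The point is simply that no warping or landmark-based alignment happens anywhere in the pipeline; the only spatial operation is the bounding-box crop.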
We calculate the mean face for the same reason VGG/ImageNet pipelines use mean and std statistics: it is just input normalization.
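In other words, the statistics are computed over the training faces and subtracted from each input, analogous to ImageNet per-channel mean/std. A hedged sketch (array shapes and helper names are illustrative, not the repository's actual code):

```python
import numpy as np

# Statistics computed once over the training set (N x H x W x C array).
# mean_face = train_faces.mean(axis=0)
# std_face  = train_faces.std(axis=0)

def normalize_with_mean_face(face, mean_face, std_face=None):
    """Subtract the dataset mean face (and optionally divide by the std),
    mirroring the mean/std normalization used for VGG/ImageNet models."""
    out = face.astype(np.float32) - mean_face
    if std_face is not None:
        out /= std_face
    return out
```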
In the FERA17 challenge, the dataset was provided with different viewpoints, so the viewpoint classifier is just a regular classifier trained on the ground-truth view labels of the training set.
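To make "regular classifier" concrete, here is a hypothetical sketch in PyTorch: a small CNN trained with cross-entropy on the view labels shipped with the FERA17 training data (the architecture, layer sizes, and the `num_views` default are assumptions, not the actual model from the repository):

```python
import torch
import torch.nn as nn

class ViewpointClassifier(nn.Module):
    """Plain supervised classifier over the challenge's view labels."""
    def __init__(self, num_views=9):  # FERA17 renders BP4D at multiple views
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_views)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Standard training step with the ground-truth view labels:
# loss = nn.CrossEntropyLoss()(model(images), view_labels)
```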