Hello, and good evening. Very concrete work on the paper as well as the approach. 👍
I note the following from the paper:
> The best feature vector for each attribute is obtained from the designated region via a combination of three shape and color features: RGB histograms, HOG [19], and LBP [20]. The combination is selected empirically to extract the best feature vector for each attribute. A multi-class SVM classification model using LIBSVM [21] is adopted here for training and classification, after dimensionality reduction of the extracted feature vectors using PCA [22].
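To make sure I understand the quoted pipeline, here is a rough sketch of how I read it. This is my own hypothetical reconstruction, not the paper's code: scikit-learn is used as a stand-in for LIBSVM, and the HOG/LBP extraction (which would normally come from e.g. scikit-image) is stubbed with random placeholder vectors.

```python
# Hypothetical sketch of the described per-attribute pipeline:
# concatenate RGB-histogram + HOG + LBP features for each region of
# interest, reduce with PCA, then classify with a multi-class SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def rgb_histogram(image, bins=16):
    """Concatenated per-channel histogram of an HxWx3 uint8 image."""
    return np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)

# Synthetic region-of-interest crops and attribute labels (placeholders).
n_samples, n_classes = 60, 3
images = rng.integers(0, 256, size=(n_samples, 32, 32, 3), dtype=np.uint8)
labels = rng.integers(0, n_classes, size=n_samples)

hog_stub = rng.normal(size=(n_samples, 36))  # stand-in for real HOG features
lbp_stub = rng.normal(size=(n_samples, 59))  # stand-in for real LBP histograms

# Empirically selected feature combination -> one concatenated vector per region.
features = np.hstack([
    np.stack([rgb_histogram(im) for im in images]),
    hog_stub,
    lbp_stub,
])

pca = PCA(n_components=20).fit(features)          # dimensionality reduction
clf = SVC(kernel="rbf").fit(pca.transform(features), labels)  # multi-class SVM

preds = clf.predict(pca.transform(features))      # one class label per region
```

Is this roughly the structure, with one such classifier trained per attribute?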
So, the Face++ framework is used to identify the regions of interest.
But how do you move from the regions of interest to the classes of the facial attributes?
Did you also create an intermediate dataset to tag the images with the facial attributes?