wangguanzhi / LADN

This is the implementation of Local Adversarial Disentangling Network for Facial Makeup and De-Makeup
https://georgegu1997.github.io/LADN-project-page/

How did you train the svm model for classification of facial attributes? #15

Open theunalivepool opened 1 year ago

theunalivepool commented 1 year ago

Hello, and good evening. Very solid work on both the paper and the approach. 👍

I note the following from the paper:

The best feature vector for each attribute is obtained from the designated region via a combination of three shape and color features: RGB histograms, HOG [19] and LBP [20]. The combination is selected empirically to extract the best feature vector for each attribute. A multi-class SVM classification model using LIBSVM [21] is adopted here for training and classification, after dimensionality reduction of the extracted feature vectors using PCA [22].

So, the Face++ framework is used to identify the regions of interest.

  1. But how do you move from the regions of interest to the classes of the facial attributes?
  2. Did you also create an intermediate dataset to tag the images with the facial attributes?
  3. What did the training of the SVM take as input?
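
For reference, here is a minimal sketch of how I currently picture such a pipeline, assuming scikit-image for HOG/LBP, NumPy histograms for the color features, and scikit-learn's PCA and SVC (which wraps LIBSVM) in place of calling LIBSVM directly. The region size, bin counts and other parameters are my own guesses, not your code:

```python
# Hypothetical sketch: per-region RGB histogram + HOG + LBP features,
# PCA for dimensionality reduction, then a multi-class SVM.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog, local_binary_pattern
from skimage.transform import resize
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def region_features(region_rgb):
    """Concatenate RGB histogram, HOG and LBP features for one cropped face region."""
    # Resize so every region yields a feature vector of the same length.
    region_rgb = resize(region_rgb, (64, 64), preserve_range=True)
    gray = rgb2gray(region_rgb)

    # Color: 8-bin normalized histogram per RGB channel.
    rgb_hist = np.concatenate([
        np.histogram(region_rgb[..., c], bins=8, range=(0, 256), density=True)[0]
        for c in range(3)
    ])

    # Shape: HOG over the grayscale region.
    hog_feat = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    # Texture: histogram of uniform LBP codes (P + 2 = 10 bins).
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return np.concatenate([rgb_hist, hog_feat, lbp_hist])


# sklearn's SVC wraps LIBSVM and handles multi-class classification one-vs-one,
# roughly matching the LIBSVM + PCA setup in the quoted passage.
clf = make_pipeline(StandardScaler(), PCA(n_components=50), SVC(kernel="rbf"))
# clf.fit(X, y)  # X: stacked region_features(...) rows, y: attribute class labels
```

Is this roughly the setup, or does the training take something else as input?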
georgegu1997 commented 1 year ago

Hi,

Thanks for your interest in our work! Could you point out where you found the above sentences?