av-savchenko / face-emotion-recognition

Efficient face emotion recognition in photos and videos
Apache License 2.0

question about training on my own data #39

Closed · dao027 closed this 1 year ago

dao027 commented 1 year ago

Thank you for sharing your models and code. In my work I want to train a 3-class face emotion recognition model (8 classes is too much for me) on my own data using PyTorch, and I hope to train my classifier on top of enet_b0_8_best_afew.pt (just train the classifier with the backbone frozen). I really don't want to train from scratch O_O, but I don't know how to start from your model. Can you give me some suggestions?

Or can you tell me which of these training scripts I should use? I can't tell the difference between them.

av-savchenko commented 1 year ago

Thanks for your interest! You can train your classifier similarly to conventional image recognition techniques with pre-trained neural nets. You have two options:

  1. Extract emotional features with one of my models, and then train a classifier from scikit-learn on top of these features. An example is available at train_emotions-pytorch-afew-vgaf.ipynb. There the features are extracted from video frames, so I need to aggregate them into a single descriptor (see the function create_dataset), but you can skip this step if you have a dataset of static photos. See the first sketch below.
  2. Fine-tune the model (head only or the whole model). An example is available at train_emotions-pytorch.ipynb. Just replace the cell starting with `model=timm.create_model('tf_efficientnet_b0_ns', pretrained=False)` so that it reads `model=torch.load('../../models/affectnet_emotions/enet_b0_8_best_afew.pt')`, and use your own train_dir and test_dir. See the second sketch below.
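
To make option 1 concrete, here is a minimal sketch: drop the 8-class head so the model emits features, then fit any scikit-learn classifier. The 224x224 input size, ImageNet normalization, `train_dir`/`test_dir` folder names, and the choice of LogisticRegression are my assumptions here, not copied from the notebook:

```python
import numpy as np
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from sklearn.linear_model import LogisticRegression

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Load the full pre-trained model and drop its 8-class head, so the
# forward pass returns penultimate-layer features instead of logits.
model = torch.load('../../models/affectnet_emotions/enet_b0_8_best_afew.pt',
                   map_location=device)
model.classifier = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(data_dir):
    # data_dir is expected to hold one sub-folder per class,
    # e.g. train_dir/happy, train_dir/sad, train_dir/neutral
    dataset = datasets.ImageFolder(data_dir, transform=preprocess)
    loader = DataLoader(dataset, batch_size=32, shuffle=False)
    feats, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            feats.append(model(x.to(device)).cpu().numpy())
            labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)

X_train, y_train = extract_features('train_dir')
X_test, y_test = extract_features('test_dir')

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))
```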
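
And a minimal sketch of option 2 with a frozen backbone (head-only fine-tuning). Again, the data layout, preprocessing, epoch count, and learning rate are placeholder assumptions; the notebook has the full training setup:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = 'cuda' if torch.cuda.is_available() else 'cpu'

model = torch.load('../../models/affectnet_emotions/enet_b0_8_best_afew.pt',
                   map_location=device)

# Freeze the backbone, then swap the 8-class head for a 3-class one;
# the new Linear layer is created with requires_grad=True by default.
for param in model.parameters():
    param.requires_grad = False
model.classifier = torch.nn.Linear(model.classifier.in_features, 3).to(device)

# eval() keeps dropout off and the frozen BatchNorm statistics fixed;
# gradients still flow into the new head during backprop.
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_loader = DataLoader(datasets.ImageFolder('train_dir', preprocess),
                          batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(3):  # a few epochs are usually enough for a linear head
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x.to(device)), y.to(device))
        loss.backward()
        optimizer.step()
```

To fine-tune the whole model instead, skip the freezing loop and pass model.parameters() to the optimizer, typically with a smaller learning rate.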
dao027 commented 1 year ago

Copy that, so many thanks!! OvO

av-savchenko commented 1 year ago

Closing due to inactivity