ESanchezLozano / Action-Units-Heatmaps

Code for BMVC paper "Joint Action Unit localisation and intensity estimation through heatmap regression"

Disfa Partition #10

Closed Sunner4nwpu closed 3 years ago

Sunner4nwpu commented 3 years ago

Hello, thanks for sharing the code.

Could you please tell me the partition of the DISFA dataset you used in the journal paper "A Transfer Learning approach to Heatmap Regression for Action Unit intensity estimation"?

Thank you very much

ESanchezLozano commented 3 years ago

Hi,

Thanks for your enquiry. For DISFA we did a three-fold cross-validation, reporting results on the aggregated test predictions while ensuring no subject overlap between splits. The code for that paper will be released sometime soon.

The partitions were randomly generated and then kept fixed, as follows:

Fold 0:
Train: SN024, SN004, SN023, SN032, SN018, SN030, SN003, SN031, SN013, SN010, SN011, SN005
Valid: SN012, SN025, SN017, SN002, SN016, SN021
Test: SN001, SN007, SN006, SN029, SN026, SN028, SN009, SN008, SN027

Fold 1:
Train: SN003, SN031, SN013, SN001, SN007, SN006, SN029, SN026, SN028, SN009, SN008, SN027
Valid: SN024, SN004, SN023, SN032, SN018, SN030
Test: SN010, SN011, SN005, SN012, SN025, SN017, SN002, SN016, SN021

Fold 2:
Train: SN010, SN011, SN005, SN012, SN025, SN017, SN002, SN016, SN021, SN009, SN008, SN027
Valid: SN001, SN007, SN006, SN029, SN026, SN028
Test: SN024, SN004, SN023, SN032, SN018, SN030, SN003, SN031, SN013
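For convenience, the splits above can be written down as a Python dict. The subject IDs are copied verbatim from the lists above; the variable name and the sanity checks are illustrative, not part of the released code. The checks only confirm that each fold partitions the 27 DISFA subjects and that the three test sets tile the dataset exactly once.

```python
# Fixed DISFA three-fold partition as quoted above (illustrative container).
DISFA_FOLDS = {
    0: {
        "train": ["SN024", "SN004", "SN023", "SN032", "SN018", "SN030",
                  "SN003", "SN031", "SN013", "SN010", "SN011", "SN005"],
        "valid": ["SN012", "SN025", "SN017", "SN002", "SN016", "SN021"],
        "test":  ["SN001", "SN007", "SN006", "SN029", "SN026", "SN028",
                  "SN009", "SN008", "SN027"],
    },
    1: {
        "train": ["SN003", "SN031", "SN013", "SN001", "SN007", "SN006",
                  "SN029", "SN026", "SN028", "SN009", "SN008", "SN027"],
        "valid": ["SN024", "SN004", "SN023", "SN032", "SN018", "SN030"],
        "test":  ["SN010", "SN011", "SN005", "SN012", "SN025", "SN017",
                  "SN002", "SN016", "SN021"],
    },
    2: {
        "train": ["SN010", "SN011", "SN005", "SN012", "SN025", "SN017",
                  "SN002", "SN016", "SN021", "SN009", "SN008", "SN027"],
        "valid": ["SN001", "SN007", "SN006", "SN029", "SN026", "SN028"],
        "test":  ["SN024", "SN004", "SN023", "SN032", "SN018", "SN030",
                  "SN003", "SN031", "SN013"],
    },
}

# Sanity checks: within each fold the three splits are disjoint and cover
# all 27 subjects, and the three test sets together cover every subject once.
for fold in DISFA_FOLDS.values():
    splits = [set(fold["train"]), set(fold["valid"]), set(fold["test"])]
    assert sum(len(s) for s in splits) == len(set().union(*splits)) == 27

all_test = [s for fold in DISFA_FOLDS.values() for s in fold["test"]]
assert len(all_test) == len(set(all_test)) == 27
```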

Sunner4nwpu commented 3 years ago

Hi,

Sorry, I have one more question.

In standard N-fold cross-validation, the common approach is to use N-1 folds as the training set and the remaining fold as the test set; the whole process is repeated N times, and the reported result is the mean over the N folds together with its standard deviation.

From your partitions, it looks like you do it differently by creating both a validation and a test set. Do you mean that in each fold you train on only 12 subjects, use the validation set (6 subjects) to select the best model, and save the results on the test set? And that finally you aggregate the 3 sets of test predictions (together they cover the whole dataset) and compute a single ICC?

ESanchezLozano commented 3 years ago

Yes, exactly: the validation set is used to select the best model, whose results are then reported on the test set. The predictions from each fold's test set are then gathered together for the ICC computation.
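The exact ICC variant isn't stated in this thread; AU intensity work typically reports ICC(3,1) (Shrout-Fleiss two-way mixed, single measure, with the ground truth and the predictions treated as k=2 raters), so the sketch below assumes that variant. The function names (`icc_3_1`, `aggregate_icc`) are illustrative and not from the repository.

```python
import numpy as np

def icc_3_1(labels, preds):
    """ICC(3,1): consistency between ground-truth intensities and
    predictions, treating the two series as k=2 raters over n targets."""
    x = np.stack([np.asarray(labels, float), np.asarray(preds, float)], axis=1)
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between targets
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ss_total - ss_rows - ss_cols
    bms = ss_rows / (n - 1)                 # between-target mean square
    ems = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (bms - ems) / (bms + (k - 1) * ems)

def aggregate_icc(fold_results):
    """Aggregation as described above: concatenate the per-fold test
    labels/predictions (together they cover every subject exactly once),
    then compute a single ICC over the concatenation."""
    labels = np.concatenate([lab for lab, _ in fold_results])
    preds = np.concatenate([pred for _, pred in fold_results])
    return icc_3_1(labels, preds)
```

Note that ICC(3,1) is invariant to a constant rater bias, so predictions shifted by a fixed offset still score 1.0; this is one reason it is preferred over plain correlation for intensity estimation.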

Sunner4nwpu commented 3 years ago

OK, understood.

Thanks a lot for replying.