**Closed** — SohilZidan closed this issue 3 years ago.
Hi there,

You will need to train a model on the Researcher's Night training set (essentially, write something like https://github.com/Tobias-Fischer/rt_gene/blob/master/rt_bene_model_training/pytorch/rtbene_dataset.py for Researcher's Night) and then evaluate the trained model on their test set.
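A minimal sketch of what such a dataset class could look like, modeled loosely on `rtbene_dataset.py`. The class name, directory layout (`left/` and `right/` eye-patch folders), and annotation format are assumptions for illustration, not the repo's actual conventions; in practice this would subclass `torch.utils.data.Dataset` and load the images with a transform.

```python
import os

class ResearchersNightDataset:  # in practice: subclass torch.utils.data.Dataset
    """Yields (left_eye_path, right_eye_path, blink_label) per sample.

    root        -- dataset root directory (hypothetical layout with
                   'left/' and 'right/' eye-patch subfolders)
    annotations -- list of (image_id, label) pairs, label in {0.0, 1.0}
    """

    def __init__(self, root, annotations):
        self.root = root
        self.samples = list(annotations)

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        image_id, label = self.samples[idx]
        # Paths only; a real __getitem__ would open the images and
        # apply the same preprocessing/transforms as at training time.
        left = os.path.join(self.root, "left", f"{image_id}.png")
        right = os.path.join(self.root, "right", f"{image_id}.png")
        return left, right, float(label)
```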
What you are doing is cross-dataset testing: taking a model trained on the RT-BENE dataset and applying it to Researcher's Night. While this works well for the Talking Face dataset (see Section 4.2 of the paper), Researcher's Night is too different from RT-BENE for this to give good performance.
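For the cross-dataset evaluation itself, the metrics are straightforward to compute from the raw prediction/label pairs. A small self-contained sketch of the confusion counts and F1 score for binary blink labels (helper names are mine; RT-BENE's evaluation code may organize this differently):

```python
def confusion_counts(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary 0/1 labels."""
    tp = fp = fn = tn = 0
    for t, p in zip(y_true, y_pred):
        if t == 1 and p == 1:
            tp += 1
        elif t == 0 and p == 1:
            fp += 1
        elif t == 1 and p == 0:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall; 0.0 on empty denominators."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

Because blinks are rare (heavy class imbalance), F1 on the blink class is much more informative here than raw accuracy.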
Does that make sense? Closing here for now, feel free to re-open with more questions.
Also note that the paper used TensorFlow, not PyTorch; there are implementation differences between the two that matter.
Hello,
First question: can you provide more information on what you used for evaluation on the Researcher's Night dataset? You evaluated 105,721 images; is that only the test split? I am trying to reproduce the results of the blinking paper. I used MediaPipe to extract eye cutouts and then evaluated on Researcher's Night (the whole dataset, ~220k images), basically computing a confusion matrix, but the results are poor.
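One common source of bad cross-dataset numbers is the eye-cutout geometry: if the MediaPipe crops differ in size, margin, or aspect ratio from what the model saw at training time, performance drops sharply. A sketch of a landmark-to-crop-box helper to make that step explicit and reproducible (the specific FaceMesh landmark indices and the margin value are assumptions you would need to match to the training-time crops):

```python
def eye_crop_box(landmarks, indices, img_w, img_h, margin=0.3):
    """Axis-aligned pixel crop box around a subset of face landmarks.

    landmarks -- list of (x, y) in normalized [0, 1] coordinates,
                 e.g. from MediaPipe FaceMesh
    indices   -- landmark indices belonging to one eye (assumed)
    margin    -- extra padding as a fraction of the larger box side
    Returns (x0, y0, x1, y1) in pixels, clipped to the image.
    """
    xs = [landmarks[i][0] * img_w for i in indices]
    ys = [landmarks[i][1] * img_h for i in indices]
    pad = margin * max(max(xs) - min(xs), max(ys) - min(ys))
    x0 = max(0, int(min(xs) - pad))
    y0 = max(0, int(min(ys) - pad))
    x1 = min(img_w, int(max(xs) + pad))
    y1 = min(img_h, int(max(ys) + pad))
    return x0, y0, x1, y1
```

Comparing a few of these crops side by side with RT-BENE's training patches would quickly show whether the preprocessing, rather than the model, explains the bad confusion matrix.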
Model ensemble:
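If "model ensemble" here means averaging the outputs of several trained blink models before thresholding, a minimal sketch of that step (function names and the 0.5 threshold are illustrative assumptions, not the repo's exact code):

```python
def ensemble_blink_probs(probs_per_model):
    """Mean blink probability per image across an ensemble.

    probs_per_model -- list of per-model probability lists, all the
                       same length (one probability per image)
    """
    n_models = len(probs_per_model)
    return [sum(per_image) / n_models for per_image in zip(*probs_per_model)]

def classify_blinks(probs_per_model, threshold=0.5):
    """Threshold the averaged probabilities into 0/1 blink labels."""
    return [1 if p >= threshold else 0
            for p in ensemble_blink_probs(probs_per_model)]
```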
If possible, could you provide the code for reproducing the results?