tahafkh opened 3 weeks ago
Hi,
Thanks for your interest in this work!
For now, I can think of two possible explanations for this:
1) The random seeds set at the beginning of the "lip_reading.py" script may not cover all sources of randomness. If you run the script multiple times, do you see different results on each run? If so, we may have failed to fix all randomness, and results could vary a bit between training cycles. In that case, try another seed and increase the number of epochs.
2) I am unsure whether the pre-processing of the dataset in this repo fully matches what is described in the article. The other students I was working with at the time used multiple pre-processed versions of the DVS-Lip dataset, so the discrepancy may stem from a difference in pre-processing.
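Regarding point 1, a quick way to check is to seed every common source of randomness explicitly. This is only a generic sketch, not the exact seeding code in "lip_reading.py"; the `set_seed` helper name is mine, and the PyTorch calls are guarded so the snippet also runs without torch installed (assuming the training code is PyTorch-based):

```python
import os
import random

import numpy as np


def set_seed(seed: int) -> None:
    """Seed the common sources of randomness in a training script."""
    random.seed(seed)                       # Python's built-in RNG
    np.random.seed(seed)                    # NumPy RNG
    os.environ["PYTHONHASHSEED"] = str(seed)
    try:
        import torch

        torch.manual_seed(seed)             # CPU RNG
        torch.cuda.manual_seed_all(seed)    # all GPU RNGs
        # Deterministic cuDNN kernels trade some speed for reproducibility.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass  # torch not installed; nothing more to seed
```

Even with all of this, some GPU ops are non-deterministic by design, so small run-to-run variation can remain.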
I hope this helps for now.
Hello,
I'm trying to reproduce the results in the paper, but I can't reach 60.2% (my highest test accuracy is around 57%). Do you have any idea about the source of the problem? I'm using the exact code from your repo.
Thanks for the help!