Open · ioctl-user opened this issue 2 years ago
Also, it seems there is no example in the README.md of how to run face emotion recognition on its own.
Sorry, but at the moment we are not using Docker, and I am a little busy with other projects to create it. If I get some free time I will try to implement it, but for now the repository is provided as-is, sorry. However, if you create a Docker version, please let me know and I will include it in the README so other people can use it. :)
Regarding face emotion recognition, I don't know whether you mean the 'static models' or the 'sequential models'. For the static models, you can see an example in the comments at the top of MMEmotionRecognition/src/Video/models/staticModels/FeatureTrainingAUs.py (or in the README, as you say). For the sequential models, I have added a new README at MMEmotionRecognition/src/Video/models/sequenceLearning/README_AUS.md. I created a separate README because running this model is more complex, and I think it is easier to follow in its own file.
Thanks!
My current problem is running emotion recognition with the downloaded models.
I have downloaded and unpacked all the RAVDESS videos into the MMEmotionRecognition/RAVDESS directory, and models.zip into the MMEmotionRecognition/models directory.
I have an input video input.mp4 and a picture file input.jpg.
What commands should be run to recognize emotions in these files?
Could you please provide a Dockerfile to create the needed environment?
I have set up the project inside the pytorch/pytorch Docker image, but I had to fix several problems along the way.
It would also be useful to have instructions, or a script in the Dockerfile, for downloading and building the OpenFace project, which is a dependency.
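In case it helps others, here is a minimal sketch of what such a Dockerfile might look like. This is an assumption, not an official setup: the apt package list, the `requirements.txt` name, and the exact OpenFace build flags may need adjusting, and the clone URL and `download_models.sh` step follow the upstream TadasBaltrusaitis/OpenFace repository's usual build procedure.

```dockerfile
# Hypothetical environment sketch -- package names and paths are
# assumptions, not part of this repository.
FROM pytorch/pytorch:latest

# System dependencies needed to compile OpenFace (CMake, OpenCV, dlib, boost)
RUN apt-get update && apt-get install -y \
    build-essential cmake git wget unzip \
    libopenblas-dev liblapack-dev \
    libopencv-dev libdlib-dev libboost-all-dev \
    && rm -rf /var/lib/apt/lists/*

# Clone and build OpenFace; download_models.sh fetches the trained
# patch-expert models required at runtime
RUN git clone https://github.com/TadasBaltrusaitis/OpenFace.git /opt/OpenFace \
    && cd /opt/OpenFace \
    && bash download_models.sh \
    && mkdir build && cd build \
    && cmake -D CMAKE_BUILD_TYPE=RELEASE .. \
    && make -j"$(nproc)"

# Copy this project in and install its Python dependencies
# (assumes a requirements.txt exists at the repository root)
COPY . /workspace/MMEmotionRecognition
WORKDIR /workspace/MMEmotionRecognition
RUN pip install -r requirements.txt
```

With an image built from this, the RAVDESS data and the unpacked models.zip could be mounted as volumes rather than baked into the image, which keeps the image small and the data easy to swap.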