Have a look at the top of chapter7.py; the process flow is explained there. After you have recorded your own training set, which is stored in datasets/faces_training.pkl, you train the MLP on it by running train_test_mlp.py. This script will create datasets/faces_preprocessed.pkl.
Chapter 7: Learning to Recognize Emotion in Faces
An app that combines both face detection and face recognition, with a
focus on recognizing emotional expressions in the detected faces.
The process flow is as follows:
* Run the GUI in Training Mode to assemble a training set. Upon exiting,
the app will dump all assembled training samples to a pickle file
"datasets/faces_training.pkl".
* Run the script train_test_mlp.py to train an MLP classifier on the
dataset. This script will store the parameters of the trained MLP in
a file "params/mlp.xml" and dump the preprocessed dataset to a
pickle file "datasets/faces_preprocessed.pkl".
* Run the GUI in Testing Mode to apply the pre-trained MLP classifier
to the live stream of the webcam.
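For future readers, here is a rough illustration of what the second step boils down to. This is only a minimal sketch using OpenCV's cv2.ml.ANN_MLP, and it assumes faces_training.pkl holds a (samples, labels) tuple; the real train_test_mlp.py may differ in its preprocessing and MLP wrapper, so treat this as a hypothetical outline rather than the actual script.

```python
import pickle

import cv2
import numpy as np

# Load the raw training set recorded by the GUI in Training Mode.
# Assumption: the pickle holds a (samples, labels) tuple.
with open("datasets/faces_training.pkl", "rb") as f:
    samples, labels = pickle.load(f)

# Minimal preprocessing: flatten every snapshot into a float32 feature
# vector and one-hot encode the emotion labels.
X = np.float32(samples).reshape(len(samples), -1)
classes = sorted(set(labels))
y = np.zeros((len(labels), len(classes)), dtype=np.float32)
for i, label in enumerate(labels):
    y[i, classes.index(label)] = 1.0

# Train a multi-layer perceptron with OpenCV's ML module.
mlp = cv2.ml.ANN_MLP_create()
mlp.setLayerSizes(np.int32([X.shape[1], 100, len(classes)]))
mlp.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)
mlp.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)
mlp.train(X, cv2.ml.ROW_SAMPLE, y)

# Store the trained parameters and dump the preprocessed dataset,
# as the docstring describes.
mlp.save("params/mlp.xml")
with open("datasets/faces_preprocessed.pkl", "wb") as f:
    pickle.dump((X, y), f)
```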
So you're saying train_test_mlp.py will create it? Thanks for the response btw
Exactly! You're welcome.
You are telling me to run in "Training Mode" and "Testing Mode". I'm sorry if this is a dumb question, but I just see the option to "Run" in Spyder; I don't get these modes...
Also, where is datasets/faces_training.pkl? The only files in the datasets folder are "init" and "homebrew".
Hi, as mentioned above, running chapter7.py in Training Mode will create datasets/faces_training.pkl. Running train_test_mlp.py will then convert datasets/faces_training.pkl to datasets/faces_preprocessed.pkl.
Regarding the GUI issue, you should see "Train" and "Test" buttons at the bottom:
Click on "Train", select an emotion such as "happy", then smile into your webcam and click "Take Snapshot". Keep taking snapshots of all emotions so that you'll get a good training set (I'd say, at least 10 snapshots per emotion). Then when you close the GUI, all snapshots will be dumped to datasets/faces_training.pkl
.
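For illustration only, the dumping step amounts to something like the sketch below, assuming the GUI collects the snapshots and their emotion labels in two parallel lists. The names used here are assumptions, not necessarily what chapter7.py calls them.

```python
import pickle


def dump_training_set(samples, labels, path="datasets/faces_training.pkl"):
    """Write all collected (snapshot, emotion label) pairs to disk."""
    with open(path, "wb") as f:
        pickle.dump((samples, labels), f)


# e.g., called once when the GUI window is closed:
# dump_training_set(snapshots, emotion_labels)
```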
However, there seems to be a bug in the Windows version of wxPython (see issue #4), which messes up the GUI. I'm working on it and will post an update as soon as I have found a workaround.
Ok, thanks for the help :)
Hello Michael,
Is it possible to train the MLP (params/mlp.xml) using multiple datasets/faces_training-*.pkl files? Basically, I want to do continuous Training and Testing across multiple sessions.
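For anyone looking for a starting point: one possible approach (not taken from the repo) is to merge the per-session files into a single training set before running train_test_mlp.py. The sketch below assumes each pickle holds a (samples, labels) tuple like the single-session file discussed above; the file naming pattern is hypothetical.

```python
import glob
import pickle

all_samples, all_labels = [], []
for path in sorted(glob.glob("datasets/faces_training-*.pkl")):
    with open(path, "rb") as f:
        samples, labels = pickle.load(f)
    all_samples.extend(samples)
    all_labels.extend(labels)

# Write one combined file that train_test_mlp.py can pick up.
with open("datasets/faces_training.pkl", "wb") as f:
    pickle.dump((all_samples, all_labels), f)
```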
Hi guys, I just created a Google group that would be better suited for discussions. Issues are more for bug reports... Maybe we can discuss options over there?
agree.
Ok, just let me know if you get the GUI working.
It keeps freezing when I try to run it... When it does run, only a picture of me shows up, and where the buttons should be, nothing comes up.
I'll email you.
Reportedly, this issue has been resolved. Future readers/users, you are welcome to discuss issues and seek help in our brand-new Google group.
Hello Michael, is this code suitable for Linux? I just ran the code on Ubuntu and the Test button cannot be clicked, which is very strange. It just says that there should be at least one input.
Hi XDUSPONGE, I know you're aware of the answer by now, but I'm posting this here for the sake of others: This issue has been discussed in the Google Discussion group.
In short, you need to generate a training set first by following the workflow I mentioned in the second post above from Feb 16. This excerpt is taken from the source code of chapter7/chapter7.py. A more detailed explanation and step-by-step guide can be found in Chapter 7 of the book.
If you have more questions or run into any issues with this, it's easier to ask in the Discussion group. Thanks!
Why is this coming up? There is no faces_preprocessed class in the code folders.