Closed — arlindohall closed this issue 8 years ago
@arlindohall I just added the first optimization to the LDA code. However, I noticed that LDA was performing much worse than normal against the ORL database. Whereas it normally yields 50-70% accuracy, it currently runs at 5-15%. This degraded performance is not caused by the optimization; when I ran the code without any of my modifications, it still ran at 5-15%. You should test the LDA code before you pull my commit to see this for yourself.
I'm really not sure what caused this sudden change in performance because lda.c hasn't really been modified since the summer, so this might be a good issue for you to explore. I can't really spend any more time on this because I need to continue work with the scripts and Palmetto.
I'll figure that out this weekend and try to let you know why by Monday.
Miller Hall, Computer Engineering, Clemson University, 2016
@bentsherman after pulling the most recent changes and adding a line to do the second optimization, I'm getting percentages similar to where I started. Would you mind pulling and running again with these parameters and letting me know what sort of values you get? I'm getting between 50 and 80% with them.

```
./scripts/cross-validate.sh -p orl_faces -e pgm -t 3 -i 10 --lda
```
Next step will be to turn the value I added (n_opt2) into a parameter as you have done with n_opt1, and then begin looking at how I can automate comparing these two. I may also have to look at logging the result values somehow so that it's easier to compare performance.
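For the logging piece, here's a minimal sketch of what I have in mind: run the cross-validation script, pull the accuracy out of its output, and append it to a CSV so runs with different settings can be compared side by side. The output format assumed by the regex, the log file name, and the parameter string are all assumptions here, not the script's actual behavior.

```python
import re
import subprocess

def parse_accuracy(output):
    """Return the last percentage found in run output, or None.

    The exact output format of cross-validate.sh is an assumption;
    adjust the pattern to match the real log lines.
    """
    matches = re.findall(r"(\d+(?:\.\d+)?)\s*%", output)
    return float(matches[-1]) if matches else None

def log_run(logfile, params, accuracy):
    """Append one run's parameters and accuracy as a CSV line."""
    with open(logfile, "a") as f:
        f.write("{},{}\n".format(params, accuracy))

def run_and_log():
    """One cross-validation run; extend with n_opt1/n_opt2 once they are flags."""
    result = subprocess.run(
        ["./scripts/cross-validate.sh", "-p", "orl_faces", "-e", "pgm",
         "-t", "3", "-i", "10", "--lda"],
        capture_output=True, text=True)
    log_run("lda_runs.csv", "lda,t=3,i=10", parse_accuracy(result.stdout))
```

Once n_opt1 and n_opt2 are both command-line parameters, sweeping them in a loop around `run_and_log()` would give us a table of accuracies to compare.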
Finding a new dataset takes a back seat until this is resolved.
Scratch that last part; I just ran again and got between 90 and 95%. This is weird, because my team is getting between 40 and 80%. I have no idea why, as our image sets are identical, the code we are using is identical, and we both rebuild from source before running this trial. Does anyone have any idea what could cause this?
LDA parameters are in place. Next step: use timing to collect performance data.
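For the timing step, one option is `clock_gettime(CLOCK_MONOTONIC, ...)` pairs inside lda.c itself; another is to time whole runs from a script. A small sketch of the latter, assuming we time from Python rather than inside the C code:

```python
import time

def timed(fn, *args, **kwargs):
    """Call fn(*args, **kwargs) and return (result, elapsed_seconds).

    Uses time.perf_counter(), a monotonic high-resolution clock,
    so the measurement is unaffected by system clock changes.
    """
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start
```

Usage would be something like `timed(subprocess.run, ["./scripts/cross-validate.sh", ...])`, logging the elapsed seconds alongside the accuracy.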
what is the accuracy looking like?
Getting values similar to before: 80%, 90%, 97%.
Logically, nothing changed with the last push; I just added the ability to change values from the command line.
The LDA algorithm is performing with much lower accuracy than PCA and ICA. After talking to Jesse, I've decided that the problem might be that LDA works better when the number of samples is large relative to the number of dimensions per sample. With this in mind, I've changed the goals of the LDA team as follows:
1) Try running LDA with a new dataset, such as MNIST, which has fewer pixels per image and more images, and see whether the accuracy improves. This may be difficult, as it is hard to say whether a training set for digits, for example, is "easy" compared to a training set for faces. This step should be done in a week or two.
2) Once this is complete, judge whether the MATLAB code is sufficiently correct, and finish the work on the LDA C code. This step should also take no more than a week or two.
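For reference, the sample-size issue has a concrete linear-algebra form, assuming the code follows the standard Fisher LDA formulation. The within-class scatter matrix is

$$S_W = \sum_{c=1}^{C} \sum_{x_i \in \text{class } c} (x_i - \mu_c)(x_i - \mu_c)^T, \qquad \operatorname{rank}(S_W) \le N - C,$$

where $N$ is the number of training images and $C$ the number of classes. When $N - C$ is smaller than the dimension $d$ (pixels per image), $S_W$ is singular, so the generalized eigenproblem $S_B w = \lambda S_W w$ is ill-conditioned; this is the classic small-sample-size problem, usually mitigated by projecting onto a PCA subspace first (the Fisherfaces approach). This is offered as a possible explanation, not a confirmed diagnosis of our accuracy numbers.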
For now the plan is to use the following:
http://yann.lecun.com/exdb/mnist/
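If we go with MNIST, the files are in the IDX format described on that page: a big-endian header (magic number 2051 for image files, then count, rows, cols) followed by raw pixels. A minimal parser sketch, based on that format description:

```python
import struct

def read_idx_images(data):
    """Parse MNIST IDX image bytes into a list of flat row-major images.

    Header is four big-endian 32-bit ints: magic (2051 for images),
    image count, rows, cols; the pixel bytes follow immediately.
    """
    magic, count, rows, cols = struct.unpack(">IIII", data[:16])
    assert magic == 2051, "not an IDX image file"
    size = rows * cols
    body = data[16:]
    return [body[i * size:(i + 1) * size] for i in range(count)]
```

The label files use the same layout with magic 2049 and no rows/cols fields; a converter from this output to whatever format the C code expects (e.g. PGM) would be the next step.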
I will update the team tomorrow (Wednesday) to let you know if this dataset proves to be useful or if another would be preferred.