Open zoeyfyi opened 7 years ago
That's it? And it works?
Yep, 1 fps but it works!
Is there time to dump it in a thread as we had before?
Maybe... I'm thinking of rewriting a much simpler version with this new code. Instead of mapping people to names, we can give them a colour so there is no need for training.
All you need to do is find the difference between the two arrays and that gives you the similarity of those two frames!
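A minimal sketch of what "the difference between the two arrays" could look like, assuming the arrays are the per-face embedding vectors: the squared Euclidean distance between two embeddings is small when they come from the same face. The function name and the toy 3-dim vectors are illustrative only (the real embeddings would be 128-dim).

```python
def embedding_distance(a, b):
    """Squared Euclidean distance between two face-embedding vectors.
    A small distance suggests the same face; a large one, different faces."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Toy example with short vectors standing in for 128-dim embeddings:
same = embedding_distance([0.1, 0.2, 0.3], [0.1, 0.2, 0.3])
diff = embedding_distance([0.1, 0.2, 0.3], [0.9, 0.1, 0.7])
print(same)  # 0.0
print(same < diff)  # True
```

In practice you would pick a threshold on this distance below which two detections are treated as the same person.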
Which is technically face recognition, and a lot faster to explain than retraining the network if someone wants to see it work.
I have just pushed a working copy to master. As long as it doesn't lose detection, the bounding box around a face should remain the same colour. If you can, could you test it with your webcam?
I will do, just need to make sure the dependencies work.
What did we have to do to use the correct manifest?
I was also wondering, in the README we have
```shell
sudo apt-get install luarocks
for NAME in dpnn nn optim optnet csvigo cutorch cunn fblualib torchx tds; do
    luarocks install $NAME
done
```
As torch includes its own luarocks, are we overwriting the manifest here?
Dependencies are installed. Just need to fix torch.
Uninstall luarocks, cd into torch's build/bin directory, then run the installs from there.
The dependencies are installed; I'm just getting "'th': no such file or directory" when net.forward is called.
Just needed to add th to my $PATH; it is drawing a box now.
And it seems to be a consistent colour
Fantastic. If you check out the test branch, I have made a program to help Amber demo it better; could you check that it works fine? Press keys 1-4 for the different modes.
One sec, still pushing...
OK, I will have a look. Moving the call to create the net outside of the loop improved performance slightly, but I still have a ~2.5-3 second latency between making a motion and it showing on screen.
Well, this commit's so large it's not pushing...
What did you change?
This is what I'm going to send Amber:
In the new program you press the numbers 1 to 4 to switch between the different parts of face recognition. If you need to explain how the program works you can do the following:
Press "1". This is the bounding box mode. In this mode we draw a box around all the faces in the frame. To do this we look at each 16x16 block of pixels and find its gradient (which way it's pointing) to generate a HOG image. We then compare this against a HOG face pattern, which is the combination of hundreds of thousands of HOG images, to see whether each part of the image looks like a face. Once we find a face we draw a box around it.
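The gradient step above can be sketched in a few lines: for each pixel in a cell we take the horizontal and vertical intensity differences, then vote the gradient magnitude into an orientation bin. This is a simplified illustration, not the repo's code (real HOG detectors like dlib's use fixed cell sizes, block normalisation, and interpolation between bins).

```python
import math

def cell_histogram(cell, bins=9):
    """Orientation histogram for one cell of a grayscale image (list of rows).
    Each interior pixel votes its gradient magnitude into a direction bin."""
    hist = [0.0] * bins
    h, w = len(cell), len(cell[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # horizontal difference
            gy = cell[y + 1][x] - cell[y - 1][x]   # vertical difference
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi     # unsigned orientation, 0..pi
            hist[min(int(ang / math.pi * bins), bins - 1)] += mag
    return hist

# A vertical edge: every gradient points horizontally,
# so all the weight lands in the first orientation bin.
cell = [[0, 0, 255, 255]] * 4
print(cell_histogram(cell))  # [1020.0, 0.0, 0.0, ...]
```

Concatenating these histograms over a sliding window gives the "HOG image" that gets compared against the learned face pattern.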
Press "2". With the position of the face, we use the same HOG image to draw 68 points, called landmarks, on every face to help us with the next step.
Press "3". Using the face landmarks, we rotate, scale and shear the image so the face is fairly central; this helps make the neural network more accurate, since the faces look more similar.
Press "4". We pass this data through a neural network which has been pre-trained on almost 1 million photos. For every face we get a 128-dimensional vector which describes the features that make up your face. Now, to recognise the faces between frames, we simply look at the faces from the last frame which were most similar and match them up.
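The frame-to-frame matching described above can be sketched as a greedy nearest-neighbour pairing: each face in the current frame claims the closest unclaimed face from the previous frame, so an identity (and its colour) persists between frames. The function and the 2-dim toy embeddings are illustrative, not the repo's actual code.

```python
def match_faces(prev, curr):
    """Greedily pair each current-frame embedding with the closest unused
    previous-frame embedding. prev/curr map an id (or colour) -> embedding."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    unused = set(prev)
    pairs = {}
    for cid, emb in curr.items():
        if not unused:
            break                    # more faces than last frame: leave extras new
        best = min(unused, key=lambda pid: dist2(prev[pid], emb))
        pairs[cid] = best
        unused.discard(best)
    return pairs

prev = {"red":  [0.1, 0.9], "blue": [0.8, 0.2]}
curr = {"a": [0.75, 0.25], "b": [0.15, 0.85]}
print(match_faces(prev, curr))  # {'a': 'blue', 'b': 'red'}
```

Greedy matching is good enough when faces move little between frames; a distance threshold would be needed to stop a brand-new face stealing a departed face's colour.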
The problem is that, to ensure none of the data is missing, I had to copy the data files outside of the data directory so that the git clone actually pulls them down.
I'm not sure if Amber will have time to clone it.
We could put it somewhere else online without the videos, as she already has those.
Now re-cloning and pushing without the videos... wow, this repo's large.
After the 100th push, it finally went through!
So long as nothing in the data folder changes, the clone script should work fine to pull the last commit from the test branch and then copy the files across.
The only files it needs are dlib_shape and nn4.small2.v1.t7, which are now in the root directory, so it won't fail if the copy fails.
Good, the copy just dumps the data folder from our backup into /FaceDetector.
If the copy fails, the videos won't work, but that should be it.
I rewrote the whole thing (apart from webcam.py) but I had to remove the videos to push it, so can you tell me if everything still works?
If you look at FaceDetector.py, those 134 lines implement almost all the features we had before! If only we had looked at OpenFace's docs sooner.
It actually took ~2 seconds to checkout that commit
Fantastic!
Do we want to add this as a separate program from the main one?
This works, but to a judge it will look much less interesting than the full UI.
I'm going to see how it works as a filter.
I was thinking of just adding the toolbar to this one. The only thing this doesn't do is the counting line.
Since we can split impostors so easily, the spy network, with real-time training, can become a reality now!
I just pushed to the test2 branch with a filter containing the detection stuff.
If the test2 branch works for you I can update the camera number and merge it into test, unless there is a more functional version.
I think it would be easier for Amber to use if we kept it simple. Plus, it doesn't really gain us much more functionality.
That was what I tried to do. The whole recognition setup is just a filter now, and the keybindings still work.
Primarily I think the old stuff is needed just because she needs to demo it and the new stuff is really slow, and will be even slower on the laptop.
Well, it didn't use that much CPU when I was testing it. Isn't it a lot slower inside the old code?
I think I'll refactor all of it tomorrow anyway. I'm worried something won't work if we merge test2. The test branch works with 4 files, and if I need to fix something over the phone it will be much easier if we don't merge.
CPU usage for just the OpenFace stuff was about 5% for me, getting ~5 FPS on just the bounding box and ~1 FPS on the recognition.
The performance didn't change for me on the filter implementation.
How do the two branches perform on your system?
I can't test it atm, but the performance won't matter too much while she's demoing it, as long as there are a few frames!
test2 uses the same files as the other branch.
We could just have Amber clone each of them, the command is just
git clone [repo] -b [branch] --depth=1
If she gets it set up fine tomorrow, we can merge for the next day.
Sounds good. I assume that the place she is staying has internet access.
It pulled just fine, but I used relative paths for the dlib-shape file, so it won't run from the runner script! It should work if she cd's into FaceDetector and runs the Python file from there, right?
The clone script should cd into /FaceDetector and then run. I think the runner script must do so as well, because it runs using data files which are in the /FaceDetector folder.
Sounds like she cloned it without an Internet connection which cleared out the directory.
Turns out this is all you need to get the 128 dim vector
Now I just need to put it through the cascade....