roccodev opened this issue 8 years ago
Hi. I would be glad to take a look at any pull request you make. I'm not sure I want the rectangles to look curved, although I'm open to debate on that. The main reason everything is flat and square is that the new machine looks like that: I modeled the facial recognition UI on the machine's interface in the opening sequence of season 5. Obviously there are enhancements that can be made, but the show now uses square edges. In fairness, though, the show is inconsistent about this: the opening sequence uses a flat UI, but a lot of the UI during the show is based on the old machine's interface.

I think OpenCV (and JavaCV) use eigenfaces for facial recognition internally. The font the machine uses is currently pretty badly rendered, but I think I will switch everything to the Samaritan font, as it is cleaner and the machine uses something similar for its UI font (its terminal font is still the one we use).

Also, for your question UI, are you using speech recognition? If so, and if it runs on the desktop, what speech library are you using? Let me know if I missed anything.
I tried Sphinx (but it's English-only), and I tried J.A.R.V.I.S., which uses the Google Speech API. That one works in my language, but it doesn't recognize my words correctly: it hears other words, sometimes completely different ones!
About Eigenfaces: you are using LBPH (or something similar), so why is opencv_face.createEigenFaceRecognizer() there?
I've also changed the designation boxes to have rounded corners and crosshairs, because I want the ones from the show itself, not from the opening sequence.
I'm new to Git, so if I can't manage a pull request, I will link the code via Gist.
That's how you initialize the facial recognition system (with opencv_face.createEigenFaceRecognizer()); I'm guessing all of the recognizers are based on that system. Also, regarding the designation boxes: unless people feel extremely strongly about them, I will leave everything square.
The designations are only images, so it's not a big deal if you change that. I personally prefer a square designation, but to each their own, I guess.
Also, please make a pull request. There was an earlier closed issue with information about how to submit a pull request ( #25 ) (https://github.com/the-machine-project/the-machine/issues/25). The entire thread talks about how to make a pull request.
Let me know if there is anything else.
The second is the SHELL
Please make these into a pull request regarding the earlier comment I made (#25).
Also, take a look at the J.A.R.V.I.S. Speech Synthesiser (I think I'll make a pull request for it). I've tried it: it's functional and cross-language. It uses Google Now's voice, and it's useful for speaking numbers. Basically, I converted the SSN digits to letters (0 = z, 1 = a, etc.) and then converted those to the NATO alphabet (a = Alpha, b = Bravo, etc.), then had that voice speak the resulting string. So my generated SSN 898-68-5686 becomes HIHFHEFHF (and it works).
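The digit-to-letter-to-NATO conversion described above can be sketched in plain Java. The class and method names here are mine, not from the project:

```java
public class SsnSpeller {
    // NATO phonetic alphabet for the letters a-i (z is handled separately)
    private static final String[] NATO = {
        "Alpha", "Bravo", "Charlie", "Delta", "Echo",
        "Foxtrot", "Golf", "Hotel", "India"
    };

    // Map a digit to a letter as described above: 0 -> 'z', 1 -> 'a', ..., 9 -> 'i'
    static char digitToLetter(char digit) {
        return digit == '0' ? 'z' : (char) ('a' + (digit - '1'));
    }

    // Look up the NATO word for a letter produced by digitToLetter
    static String natoWord(char letter) {
        return letter == 'z' ? "Zulu" : NATO[letter - 'a'];
    }

    // "898-68-5686" -> "Hotel India Hotel ...", ready to feed to the synthesiser
    static String spell(String ssn) {
        StringBuilder out = new StringBuilder();
        for (char c : ssn.toCharArray()) {
            if (Character.isDigit(c)) {
                if (out.length() > 0) out.append(' ');
                out.append(natoWord(digitToLetter(c)));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // 898-68-5686 spells out as h-i-h-f-h-e-f-h-f
        System.out.println(spell("898-68-5686"));
    }
}
```

Speaking full NATO words instead of raw letters should make the synthesised digits much harder to mishear.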
I've just made a pull request including the Questions UI and the Speech Synthesiser.
Quick questions about JARVIS:
Google's Speech API is limited to 100 requests per project. And yes, you need an API key, available from the Google API Console. Note that you must first subscribe to the chromium-dev group.
I recently updated my IntelliJ to Ultimate, so if you need any web or EE tasks done, feel free to ask. By the way, I'm working on an app version (for Android) with the Samaritan interface.
If there are API request limitations and it requires an API key, would this be suitable for an open-source project?
This is not something that needs much focus right now, but I thought I would mention it so that it's on the list of potential upgrades.
Things to add, user-wise: a blue box (members of government teams working directly for one of the Machine's assets while unaware of its existence) and a white box with red corners and crosshairs (individuals involved in imminent or ongoing irrelevant violence).
Also, I think you should consider making it so that when a person appears on camera and the machine recognizes them (through facial recognition), it does not show all of their information on screen at once. I think it would be nice if, in the future when voice commands are implemented, you could ask the machine "who is that" or "identify asset/threat". For now it could be done with a typed command such as "identify asset/threat", which would show their name, SSN, and all of the extra information.
Just a couple of thoughts about cleaning up the interface and improving the user system. Not really high on the priority list right now, but I would like to see it happen eventually.
Thank you for your time!
I edited the code so that when the machine recognizes a face, it shows only the box and a small colored rectangle displaying ASSET/THREAT/ADMIN etc., as in Season 5. I'm trying to make it so that when the user presses L it shows more information, but it seems that I cannot draw the rectangle.
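For drawing the small colored designation rectangle, here is a plain Java2D sketch. The colour choices and layout are my own assumptions loosely based on the show's Machine POV (red for threats, yellow for people who know about the Machine), not the project's actual values:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class DesignationOverlay {

    // Assumed colour scheme; adjust to match the project's palette
    static Color colorFor(String designation) {
        switch (designation) {
            case "THREAT":            return Color.RED;
            case "ADMIN":
            case "ANALOG_INTERFACE":
            case "ASSET":             return Color.YELLOW;
            default:                  return Color.WHITE;   // SECONDARY etc.
        }
    }

    // Draw the face box plus a small filled label strip above it
    // (assumes the box is not at the very top edge of the frame).
    static void drawDesignation(BufferedImage frame, String designation,
                                int x, int y, int w, int h) {
        Graphics2D g = frame.createGraphics();
        try {
            Color c = colorFor(designation);
            g.setColor(c);
            g.drawRect(x, y, w, h);                  // face box outline
            g.fillRect(x, y - 14, w, 14);            // label background strip
            g.setColor(Color.BLACK);
            g.drawString(designation, x + 2, y - 3); // designation text
        } finally {
            g.dispose();                             // release native resources
        }
    }
}
```

A common cause of "I cannot draw the rectangle" is forgetting to draw on the same `Graphics2D` the webcam frame is painted with, or drawing before the frame is painted so it gets overwritten on the next repaint.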
One thing I noticed when entering a command into the machine: it tends to spaz out when you change your facial expression, or even when your eyes look in a different direction, and it won't recognize you as Admin. That can be really annoying if you are trying to enter a command: you finish typing it and then have to re-type it all because you weren't recognized as Admin.
First of all, I would like to say that I know nothing about code and don't know if this is possible.
Now, my idea to fix this. If I understand the process correctly, facial recognition (which, as far as I'm aware, is hard on your computer) and facial detection are two different things. So my idea is...
Once the machine identifies you (using facial recognition) as ADMIN/AUX_ADMIN/ANALOG_INTERFACE/ASSET/THREAT, facial recognition for that person would no longer be active. Basically, once it knows who you are, it would lock FACIAL_RECOGNITION and only use FACIAL_DETECTION, so that if you turn your head slightly or make a different facial expression it won't switch you to SECONDARY. It is no longer constantly trying to figure out who you are; it is just tracking you, because it already knows.
Some of the problems that might arise with this (that I have thought of) are: Q: The machine does not always instantly recognize a person; it tends to show you as Secondary when you enter camera range, even if you are Admin. If it stops trying to identify you and just tracks you, won't that lock you out of the system?
A: Yes, that's true. The solution is that people identified as Secondary would not get a user lock, which basically means the machine keeps running recognition on anyone it identifies as a "Secondary". So the way the machine identifies Secondary people would not change from what it does now. The only change is that once the machine identifies you as, e.g., Admin and switches you from Secondary to Admin, THAT is when it would stop trying to identify you and just track your face, NOT before it knows you.
Q: What about once we have multiple-camera support? If the machine identifies you as Admin in camera 1 and is tracking you, then you leave camera 1's view and enter camera 2's view, won't it identify you as Secondary, since it does not have the ability to predict where you will be next?
A: The solution would be for each camera to have its own recognition state. So once "Admin" leaves camera 1 and enters camera 2, the machine will probably identify you as "Secondary" at first, which means it keeps trying to work out who you are. Once it recognizes you in camera 2, it goes back to just tracking you and no longer uses facial recognition.
Q: What if you are an Admin in a group of Secondaries and the machine loses track of you among all the people? What does it do?
A: First, I would like to say that unless you have a pretty beefy computer, the machine will crash depending on the size of the crowd you are in. As for the answer, it is very simple: the machine treats you like a Secondary, meaning it keeps attempting recognition, and once it recognizes who you are it goes back to tracking you. NOTE: Depending on the camera angles and the quality of the camera, the machine might not be able to detect you beyond a certain distance, which means that if it loses track of you in a crowd, you would have to get pretty close to a camera and look directly at it for it to identify you properly. This note assumes you are running the current version of facial recognition, which has difficulty identifying you even when you are only a foot away from the camera.
These are the only problems I have found so far. As I said before, I KNOW NOTHING ABOUT CODE; I just thought this was a good idea that would solve a lot of problems. If you see an issue, please feel free to add it to the list and I (or someone else) will try to answer it. HOWEVER, I would like to emphasize again that I will NOT be able to answer CODING questions!
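The lock-after-recognition idea above boils down to a small per-face state machine. Here is a minimal sketch of that logic in plain Java; all names are hypothetical, not from the project's code, and the expensive recognition call is passed in as a supplier so the sketch stays self-contained:

```java
import java.util.function.Supplier;

public class TrackedFace {
    // Everyone starts as SECONDARY: recognition keeps running until we know them.
    private String designation = "SECONDARY";
    private boolean locked = false;

    /**
     * Called once per frame for this tracked face. `recognize` stands in for
     * the (expensive) facial-recognition call; it returns a designation such
     * as "ADMIN", or "SECONDARY" when it is unsure.
     */
    void update(Supplier<String> recognize) {
        if (locked) return;                 // already identified: detection/tracking only
        String result = recognize.get();
        if (!"SECONDARY".equals(result)) {
            designation = result;
            locked = true;                  // lock: skip recognition from now on
        }
    }

    /** Call when the tracker loses the face (lost in a crowd, new camera, etc.). */
    void reset() {
        designation = "SECONDARY";
        locked = false;                     // back to recognising every frame
    }

    String designation() { return designation; }
    boolean isLocked()   { return locked; }
}
```

This matches the Q&A above: Secondaries never lock, each camera would hold its own `TrackedFace` instances, and losing a face in a crowd just means calling `reset()` so recognition resumes.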
@RoccoDeveloping: that sounds interesting. Can you attach a picture of what scene you are talking about from the show? All the code for drawing stuff on the image itself should be in FacialDetection.java. If you need more code-specific help, we can move this to the Gitter, as it may clog up the feed here.
Attaching it soon! This one: http://vignette2.wikia.nocookie.net/pediaofinterest/images/5/58/POI_0502_MPOV_Finch_Admin.png/revision/latest?cb=20160511002955 and this one: http://vignette2.wikia.nocookie.net/pediaofinterest/images/8/84/POI_0502_Finch_and_Root_Designated_as_Threats.png/revision/latest?cb=20160510202403
@RoccoDeveloping You might want to wait until Tuesday to commit anything; I'm planning on committing full-screen UI changes then. Additionally, I am planning to make the box on the image clickable, so more information about the user can be rendered beside the webcam view. Thanks!
I personally prefer a key listener, maybe on the "L" key. The box is always moving, so it would be difficult to click. So you are implementing a full-screen webcam view? I wasn't planning to commit, but waiting is a good idea! I'll wait for yours before committing mine.
What if there are multiple people in the webcam? Which one would the "L" open? I am also planning on making it openable from the terminal, so input can still be typed.
It would open all the faces' boxes. Also, it's better to say "You are being watched" or "We are being watched" than "Welcome to the machine".
Hmm, I'll see what you come up with. I'm not planning on committing the clicking functionality until after this commit, so we can both come up with something and compare.
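The "L" key idea from the exchange above could be wired up with a small `KeyAdapter`; in this sketch (class and method names are mine, not the project's) the toggle logic is kept separate from AWT so it is easy to reuse and test:

```java
import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;

public class DetailToggle extends KeyAdapter {
    // Whether the extra info (name, SSN, ...) is shown for ALL face boxes,
    // which sidesteps the "which face does L open?" question.
    private boolean detailsVisible = false;

    // Pure toggle logic: flips visibility on 'L'/'l', ignores other keys.
    boolean handleKey(char keyChar) {
        if (Character.toUpperCase(keyChar) == 'L') {
            detailsVisible = !detailsVisible;
        }
        return detailsVisible;
    }

    @Override
    public void keyTyped(KeyEvent e) {
        handleKey(e.getKeyChar());
        // the webcam panel would repaint here, e.g. webcamPanel.repaint()
        // (webcamPanel is a placeholder for the project's actual view)
    }

    boolean detailsVisible() { return detailsVisible; }
}
```

The listener would be registered on the webcam component with `addKeyListener(new DetailToggle())`; note the component must have keyboard focus for key events to arrive.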
Can you make the face recognition process a bit smoother?
Note: I'm developing a local machine based on this one. In mine I developed a Question UI. Basically, it scans for keywords: e.g. in "Show me the profile of Reese, John", the keywords are PROFILE, OF, and the following word(s) ("Reese, John"). Maybe I can open a pull request. And please use Futura as the interface font, and make the rectangles like the one at the-machine-project/the-machine-project in Surveillance Test. How about using Eigenfaces for face recognition?
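The keyword approach described above can be sketched in a few lines of plain Java. This is a hypothetical illustration of the PROFILE/OF pattern, not the actual Question UI code:

```java
public class QuestionParser {
    /**
     * Looks for the keywords PROFILE and OF (case-insensitive) and treats
     * everything after OF as the name, e.g.
     * "Show me the profile of Reese, John" -> "Reese, John".
     * Returns null when the question doesn't match the pattern.
     */
    static String extractProfileTarget(String question) {
        String upper = question.toUpperCase();
        int profile = upper.indexOf("PROFILE");
        int of = upper.indexOf(" OF ", Math.max(profile, 0));
        if (profile < 0 || of < 0) return null;      // not a profile question
        // take the remainder, dropping trailing punctuation
        return question.substring(of + 4).trim().replaceAll("[.?!]+$", "");
    }
}
```

Plain keyword spotting like this is brittle with word order and synonyms, but it is cheap and works offline, which fits a local machine well.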