Joe24424 / Psychological-States


Machine-learn the breakdown of facial muscles and gestures when emotional state changes #3

Open Joe24424 opened 6 years ago

Joe24424 commented 6 years ago

I have been thinking of using an image filter such as Sobel to identify edges, and then marking the muscular changes in facial expression. This kind of software already exists; I can probably steal someone's code. I would have to be able to identify people as happy or sad by expression (as well as other parameters), then recreate the happy/sad face based on expression and other video, and have enough examples of transformations of state that a program can run through these changes of state, learn them, and learn variations of them so as to appear natural. I would have to have a chance-based algorithm for how the transformation occurs, based on observations of the parameters of how people change emotional state, e.g. time of beginning, climax, dulling. I could even have every emotive change begin from zero to ensure variation. However this would work, it seems pretty difficult. You have a wonderful imagination, wonderful, wonderful imagination.
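The "time of beginning, climax, dulling" idea above can be sketched as a piecewise intensity curve with randomly sampled phase lengths. All function names and numbers here are invented, just to illustrate the chance-based part:

```python
import random

def transition_curve(onset, climax, dull, steps=100):
    """Emotion intensity over three phases: ramp up during onset,
    hold at the climax, then dull back toward rest.
    onset/climax/dull are durations (same time unit each)."""
    total = onset + climax + dull
    curve = []
    for i in range(steps):
        t = (i / (steps - 1)) * total
        if t < onset:                      # ramp up
            curve.append(t / onset)
        elif t < onset + climax:           # hold at peak
            curve.append(1.0)
        else:                              # dulling decay
            curve.append(max(0.0, 1.0 - (t - onset - climax) / dull))
    return curve

def random_transition(rng=random):
    # chance-based: each playback samples slightly different timings,
    # so repeated expressions don't look identical
    return transition_curve(onset=rng.uniform(0.2, 0.5),
                            climax=rng.uniform(0.1, 0.3),
                            dull=rng.uniform(0.3, 0.6))
```

The curve would then scale whatever drives the face (blend weights, point offsets), which is a separate problem.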

Joe24424 commented 6 years ago

http://www.ijstr.org/final-print/mar2016/Facial-Expression-Recognition-Through-Machine-Learning.pdf

Joe24424 commented 6 years ago

This really should be done with the face and the rest of the body separately

Joe24424 commented 6 years ago

So a brief idea of what I am thinking for identification of body movements: break the body down into shapes at each joint, and move these shapes about. I don't know how I would do the hard bits like the spine, but for ordinary joints you could probably do something with edge recognition on a model of, say, a finger joint. This is enough to recognize mood, and I think it's good enough for what I want, since I can just replicate video of a body and take the movement of fat in the torso from that video; all I really need is basic muscle movements. I'm kind of being really stupid writing entirely hypothetically, but tadaa, the project outline is done. Let's see if it is actually achievable for a hobo.
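The "shapes at each joint" idea for ordinary joints could start as simply as rotating a segment endpoint about its joint position with a 2D rotation matrix. A toy sketch, not a real body model:

```python
import math

def rotate_about(point, joint, angle_rad):
    """Rotate a 2D point (e.g. a fingertip) about a joint position.
    Standard 2D rotation applied to the point relative to the joint."""
    px, py = point[0] - joint[0], point[1] - joint[1]
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (joint[0] + c * px - s * py,
            joint[1] + s * px + c * py)
```

Chaining these per joint (knuckle moves the finger, elbow moves the forearm and everything below it) is basically forward kinematics; the spine would need many small joints.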

Joe24424 commented 6 years ago

So try using https://www.safaribooksonline.com/library/view/raspberry-pi-image/9781484227305/A439674_1_En_8_Chapter.html for image processing details (might change this up). Then for machine learning: https://www.safaribooksonline.com/library/view/machine-learning-with/9781617293870/kindle_split_000.html. For a Python refresher: https://www.safaribooksonline.com/library/view/beginning-python-from/9781484200285/ And another useful site in case I ever need to actually make a new algorithm: https://www.safaribooksonline.com/library/view/python-algorithms-mastering/9781484200551/ Wonderful. I hope I am not dead.

Joe24424 commented 6 years ago

The Sobel filter is apparently in the scipy library (under scipy.ndimage). Applying the Sobel filter using scipy:

I'm trying to apply the Sobel filter on an image to detect edges using scipy. I'm using Python 3.2 (64 bit) and scipy 0.9.0 on Windows 7 Ultimate (64 bit). Currently my code is as follows:

import scipy
from scipy import ndimage

im = scipy.misc.imread('bike.jpg')
processed = ndimage.sobel(im, 0)   # Sobel derivative along axis 0 only
scipy.misc.imsave('sobel.jpg', processed)

So I want to process a lot of images, and need to find a method for doing this.
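One way to batch this, assuming the images are already loaded as numpy arrays (the synthetic frames below are just stand-ins for real loaded images): combine the Sobel responses along both axes into a gradient magnitude, then map that over a stack of frames:

```python
import numpy as np
from scipy import ndimage

def sobel_magnitude(im):
    """Edge strength: gradient magnitude from Sobel responses
    along both image axes."""
    im = im.astype(float)
    gx = ndimage.sobel(im, axis=0)   # vertical derivative
    gy = ndimage.sobel(im, axis=1)   # horizontal derivative
    return np.hypot(gx, gy)

# batch: apply to every frame in a stack of shape (n_frames, h, w)
frames = np.zeros((3, 8, 8))
frames[:, :, 4:] = 1.0               # a vertical edge in every frame
edges = np.stack([sobel_magnitude(f) for f in frames])
```

For files on disk the same loop would run over whatever loader is available, writing one output image per input.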

Joe24424 commented 6 years ago

https://github.com/isseu/emotion-recognition-neural-networks

Joe24424 commented 6 years ago

https://www.kaggle.com/c/emotion-detection-from-facial-expressions/leaderboard

Joe24424 commented 6 years ago

So I have to take my data from the above link and process it somehow. Right now it identifies someone's emotion, but how do I trade the high false-positive rate for a higher false-negative rate? I am really overwhelmed =(. This is probably far too difficult for me, but I am a hobo with no other prospects. GOOGLE SEARCH: HOW TO DECREASE FALSE POSITIVES https://stats.stackexchange.com/questions/151203/how-to-reduce-number-of-false-positives I really need to learn machine learning. I don't understand any of this! I have a data set, a false-positive reduction, video; all I need is knowledge, and I can make my very own girlfriend!
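On the false-positive question: the usual trick is that the classifier outputs probabilities, and you raise the decision threshold above 0.5, accepting more false negatives in exchange for fewer false positives. A toy sketch with scikit-learn on made-up blob data (not the kaggle set):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# toy "not-happy vs happy" features: two gaussian blobs
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)
proba = clf.predict_proba(X)[:, 1]     # P(class 1) per sample

# default threshold 0.5 vs a stricter one: raising the threshold
# can only shrink the set of positive predictions
pred_default = proba >= 0.5
pred_strict = proba >= 0.8

fp = lambda pred: int(np.sum(pred & (y == 0)))   # false positives
fn = lambda pred: int(np.sum(~pred & (y == 1)))  # false negatives
```

Sweeping the threshold and plotting FP against FN is exactly what an ROC curve shows; the stats.stackexchange answer linked above covers the same idea.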

Joe24424 commented 6 years ago

Unfortunately I have tendencies to waste days of time achieving literally nothing :s

Joe24424 commented 6 years ago

I'm going on an adventure!

Joe24424 commented 6 years ago

https://www.safaribooksonline.com/learning-paths/learning-path-python/9781788990127/9781788990127-part1

Joe24424 commented 6 years ago

I am going to timestamp audio against video. I will create a happy/sad/angry audio identifier, then run this audio identifier against video, with something that can identify a face shape, and use machine-learned knowledge of timespans when someone is happy (based on audio) to teach a program when someone is happy based on video. Some issues with this: emotional transformation takes a long time, and there are also subtleties to facial change. I don't know how I will use this to actually generate facial change. You could take a face from nothing and teach a program the movements of muscle groups defined in the face, but that would take annotation and a lot of work. So how do you do it? The muscle groups can't be identified individually and given movement that can be applied to a model, so all you can really do is get pixel probabilities, which will look inauthentic. How can you do better than this? You could try using Sobel on a still face, identifying the associated edges, and tracking the movements of these edges. This would probably actually work decently. I don't know if I'm skilled enough to implement it, but it's worth trying. I hope this isn't another stupid thing that doesn't work
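The timestamping step, at least, is mechanical: once an audio classifier emits (start, end, label) segments, each video frame can inherit the label of the segment covering its timestamp. A minimal sketch (the function name and the segment format are assumptions, not an existing API):

```python
def label_frames(audio_labels, fps, n_frames):
    """audio_labels: list of (start_sec, end_sec, label) segments from
    the audio classifier. Returns one label per video frame, with None
    for frames no audio segment covers."""
    frame_labels = [None] * n_frames
    for start, end, label in audio_labels:
        first = int(start * fps)
        last = min(n_frames, int(end * fps) + 1)
        for i in range(first, last):
            frame_labels[i] = label
    return frame_labels
```

The labeled frames would then be the (weak, noisy) training set for the video-side happy/sad model.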

Joe24424 commented 6 years ago

If you use Sobel as well as audio identification to determine happy/sad and the degree, and you have https://www.youtube.com/watch?v=V You create points on the face and points on a model. You move these model points, and drag the face using a pre-decided tool on that point on the model, based on how these points move in video. Yeah, I keep wasting my time. Fuck it.
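The "move model points based on how face points move" step could, in the simplest possible version, just copy each tracked face point's displacement onto the matching rig control point. This assumes a one-to-one correspondence between face points and rig points, which a real rig would not have:

```python
def drive_rig(rest_points, tracked_points, rig_rest, gain=1.0):
    """Move each rig control point by the displacement of its matching
    tracked face point (rest position -> current tracked position).
    gain rescales motion between face space and model space."""
    rig_new = []
    for (rx, ry), (ax, ay), (bx, by) in zip(rig_rest, rest_points,
                                            tracked_points):
        dx, dy = bx - ax, by - ay
        rig_new.append((rx + gain * dx, ry + gain * dy))
    return rig_new
```

A real setup would map many skin points to fewer rig controls (a weighted fit rather than direct copying), but this is the core retargeting idea.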

Joe24424 commented 6 years ago

How do you even find a tool for facial muscle movement?

Joe24424 commented 6 years ago

Face rigs are probably not good enough, but worth a shot

Joe24424 commented 6 years ago

https://www.youtube.com/watch?v=QXhrLnqRvd8

Joe24424 commented 6 years ago

I need to set up a rig, and somehow figure out how to get the rig to move based on the location of points of a face moving in a photo.

Joe24424 commented 6 years ago

Lord have mercy.

Joe24424 commented 6 years ago

Point recognition (software that detects unique areas in images). So, as a summary:

1> Rig a face so that it can be moved by the movement of points of skin. Particular movements of skin will have to correspond with movements of bones. This sounds incredibly difficult to implement authentically, but hey, some geniuses have done it, so I must be able to do it somehow! Logic!

2> Create machine-learning point-recognition software to detect movements of these points in relation to one another.

3> Create audio recognition software. People sometimes tend to smile when they are sad etc., and have expressions that conflict with their emotions. This is another masssssive problem, and I mean masssssssive with a capital S.

I also have a dataset on kaggle of images of happy, sad, angry etc. If I was capable I could probably use this somehow. I really need to start writing code instead of just theorizing for days on end, since it is basically pointless to continue doing this; I am not really accomplishing anything. Since it is easiest, and I am a baby who is very very frustrated with life, I think I shall begin with the audio recognition, and be coddled by that guy who teaches Python projects, who does basically exactly what I need to do. Then I will go over to kaggle, find the data, and tadaaa. Of course I have a tendency to fuck around, so this could not eventuate, and then I will have to be a slave for the rest of my life, as well as mentally ill and unable to afford treatment. Lord have mercy. I wish this was actually going to work. It seems so fucking reasonable in theory, but it's probably going to take excessive amounts of computing power or something stupid. But I don't think it will, because game graphics seem much more complicated than animating a single face off of markers on language as positive or negative. But then it's probably going to be inauthentic, or I just won't be able to animate a mesh off of it, but it has already been done, so it must be possible.
It seems entirely reasonable. ENTIRELY REASONABLE. I forget the last time I felt brave; I just recall insecurity, 'cause it came down like a tidal wave and sorrow swept over me.
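The point-recognition step (detecting how points move between frames) has a crude non-machine-learning baseline: block matching, i.e. exhaustively searching a small window for where the patch around a point went. A toy numpy sketch; patch and search sizes are arbitrary:

```python
import numpy as np

def track_point(prev, curr, pt, patch=3, search=5):
    """Find where the patch around pt (row, col) in `prev` moved to in
    `curr`, by exhaustive search minimizing sum of squared differences."""
    y, x = pt
    ref = prev[y - patch:y + patch + 1, x - patch:x + patch + 1]
    best, best_ssd = pt, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ny, nx = y + dy, x + dx
            cand = curr[ny - patch:ny + patch + 1, nx - patch:nx + patch + 1]
            if cand.shape != ref.shape:
                continue  # candidate patch would leave the image
            ssd = np.sum((cand - ref) ** 2)
            if ssd < best_ssd:
                best, best_ssd = (ny, nx), ssd
    return best

# demo on a synthetic frame pair: a textured blob shifted by (2, 3)
prev = np.zeros((32, 32))
prev[10:14, 10:14] = np.arange(16.0).reshape(4, 4)
curr = np.roll(np.roll(prev, 2, axis=0), 3, axis=1)
```

Proper optical flow does this far better, but block matching shows the shape of the problem: it only works where the patch has distinctive texture, which is why trackable points must be chosen, not arbitrary.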

Joe24424 commented 6 years ago

Audio recognition. Sir, you must go to Safari Books, then you can search Python projects. When you have found the one you were doing yesterday, you may look in the index and find something like an audio recognition project. You also must use kaggle for the dataset. You don't know where to write your code, and you have never actually executed a deep learning project. There are two approaches to audio recognition: frequency and phonetics. This is a massive problem, but you should figure out how to do it. It is odd that you have been looking at this for like a month now and still haven't figured that out, but entirely fine!
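For the "frequency" route, a first feature could be the dominant frequency of a clip, computed with the FFT. This is nowhere near a real emotion detector, just the simplest possible frequency feature to start from:

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the strongest frequency component (Hz) of a 1-D signal,
    via the magnitude spectrum of the real FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[0] = 0.0            # ignore the DC offset
    return freqs[np.argmax(spectrum)]

# demo: one second of a 440 Hz sine at 8 kHz
sr = 8000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t)
```

Real speech-emotion features go further (pitch contours over time, energy, MFCCs), but they are all built on this same spectrum idea.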

Joe24424 commented 6 years ago

There are multiple methods of detecting aggression.

I will create one to detect it from an aggressive word list, found by parsing pornhub live comments with JDownloader. When I have a large enough dataset I can process it. Phrases will be determined to evoke a response if the live model's voice changes to upset frequencies, or her face changes to one of the sad/angry faces, in consequence of those words. Now the model needs to be able to initiate conversation with people, so
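The word-list detector itself is the easy part: score each comment by the fraction of its tokens found in the list. A sketch (the word list here is a placeholder, not the parsed data):

```python
import re

AGGRESSIVE_WORDS = {"stupid", "shut", "hate", "ugly"}  # placeholder list

def aggression_score(comment):
    """Fraction of tokens in the comment that appear in the
    aggressive word list (0.0 when the comment has no tokens)."""
    tokens = re.findall(r"[a-z']+", comment.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in AGGRESSIVE_WORDS)
    return hits / len(tokens)
```

Building the list itself from scraped comments correlated with voice/face reactions is the hard, data-hungry part described above.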

I know I have to download kaggle datasets, then re-upload them. Then I will create one to detect aggression from facial expressions: kaggle_data.tgz