diablodale / dp.kinect2

Kinect v2 (XBox One) plugin for Cycling '74 Max

Unable to get HDFace variables (HD facepoints, shapeunits, points3d) #14

Closed milou645 closed 8 years ago

milou645 commented 8 years ago

Computer: Windows 10 x64 - Core i7 - 16 GB RAM - GTX 970. Kinect SDK version: KinectSDK-v2.0_1409. Max: 6.1.10 x64, registered. dp.kinect: 1.0.1.0 trial

Hi! I was testing your external for an artistic installation and ran into problems with the HD Face data from the Kinect v2 and the properties that come from it. I can get all the other attributes I need (face/1/bounds, face/1/pose scale/position/translation, face/1/animunits, ...), but the HDFace-related attributes never appear at the object's outlet: no "face/1/points2d" even when @face2dpoints is set to 3 ("v2 hidef"), and no face/1/shapeunits, face/1/indices, or face/1/vertices even when @face3dmodel is set to 4 ("v2 vertices+indices").

Here are the attributes of my dp.kinect2 object: @faces 1 @faceprop 1 @face2dpoints 3 @face3dmodel 4 @facesuau 2

The kinect20.face.dll, microsoft.kinect.face.dll, and the nuidatabase folder are all in the same folder as the dp.kinect2 external.

Thanks in advance if you have any ideas!

Emilien

diablodale commented 8 years ago

Please consult https://github.com/diablodale/dp.kinect2/wiki/Message-based-Data#face-tracking, in particular the bold paragraph at the top and the rest of the section on face data.

I can't emphasize enough how slow and difficult it is to get the HD face data. This is a Microsoft issue, and they consider it working as designed. :-/ The Kinect v1 and dp.kinect have much faster face shape detection...if you have a brightly lit room.

As described in the docs, you must do the slow face rotations, you cannot have a beard, you must have a "normally" shaped face, you cannot have facial deformities, etc.

As I write in the docs, watch the modelstatus message to track your success or failure with this data.

Your setup is installed correctly and you are getting a flow of face data. I know this because you are getting data like bounds and pose.

milou645 commented 8 years ago

Hi! Thanks for the very quick answer :-) I read all the text on your (very well documented) wiki, but this part was not clear to me, since I've never seen a video of anybody capturing a model of their face (have you seen one anywhere?)

Aside from that, I work a lot with Kinect code in C++ and C#, and some things remain unclear to me. Here is the other information I can provide:

Thanks again and congrats for your documentation ! Mil

milou645 commented 8 years ago

Some news! I managed to do a capture, both in the HDFaceBasics sample and with your external. Here is what I got in the two situations:

Mil

diablodale commented 8 years ago

I'm not able to provide support for the Microsoft examples. I recommend you try the Microsoft forum at https://social.msdn.microsoft.com/Forums/en-US/home?forum=kinectv2sdk

To my knowledge, it is impossible to get HD points without a completed model. Why? Because only through the model object can you call into its interfaces to get the 3D points and then, with other APIs, map them into 2D space.
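For reference, that flow looks roughly like the outline below. This is a non-runnable sketch, not dp.kinect2's actual code: it assumes the Kinect v2 SDK's Kinect.Face.h, an already-open sensor, a tracked body with a current IFaceAlignment, and it omits all HRESULT checking and COM release calls.

```cpp
// Sketch only -- requires the Kinect v2 SDK (Kinect.Face.h) and hardware.
#include <vector>

IFaceModelBuilder* builder = nullptr;
hdFaceSource->OpenModelBuilder(FaceModelBuilderAttributes_None, &builder);
builder->BeginFaceDataCollection();

// Poll as frames arrive; the slow face rotations feed this collection.
FaceModelBuilderCollectionStatus status;
builder->get_CollectionStatus(&status);
if (status == FaceModelBuilderCollectionStatus_Complete) {
    // 1. Only a completed collection can produce the person-specific model.
    IFaceModelData* data = nullptr;
    builder->GetFaceData(&data);
    IFaceModel* model = nullptr;
    data->ProduceFaceModel(&model);

    // 2. Only the model object can yield 3D vertices for the current alignment...
    UINT32 vertexCount = 0;
    GetFaceModelVertexCount(&vertexCount);
    std::vector<CameraSpacePoint> vertices(vertexCount);
    model->CalculateVerticesForAlignment(alignment, vertexCount, vertices.data());

    // 3. ...which other APIs (ICoordinateMapper) then project into 2D space.
    ColorSpacePoint colorPoint;
    coordinateMapper->MapCameraPointToColorSpace(vertices[0], &colorPoint);
}
```

Without the completed model in step 1, there is simply no object to ask for the points in steps 2 and 3.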

It might be possible (I haven't tried) to get HD points for a default model. There is probably a default face model somewhere in memory and you can probably ask it for 3D points. That is not the feature that my customers (so far) want. They want the 3D points of the actual human in front of the Kinect.

Shapeunits will never change. Once the model has been built and is complete, the shape units are fixed forever for that human. This is by Microsoft design and I agree with the design. A person's head does not change shape. Only the animation units change...they...animate.

I use C and C++ for the dp.kinect2 external. I will admit, the Microsoft documentation has room to improve. For my face code, I had to combine the Microsoft doc + forum posts of Microsoft employees to fully understand the behavior of their face tracking API.

A caution for you: the Microsoft API will do whatever you ask of it. If you keep asking it to rebuild a model, it will rebuild a new model multiple times for the same human. Your C/C++ code must control and stop (re)building a model once it is complete. You also have to do all the face<->body matching and handle transitions in/out of the Kinect's view.

diablodale commented 8 years ago

The output of the HD 3D points depends on the value of the @face3dmodel attribute on dp.kinect2. If you choose 4, you will receive the vertices and indices. The indices almost never change; in most cases you will see the "indices" message only once per tracked human face, immediately after the model is built. After that, only the vertices change. You use each frame's new vertices with the same indices. The index data is large and never changes, so I don't repeat the same message 30 times a second. :-)

milou645 commented 8 years ago

Thanks for your answers !

I must admit I hadn't understood what the shapeunits were... In my case, I just need the appropriate data to start working on face analysis (I want to run statistics to see whether some emotional features can be extracted from the face analysis I can do with the Kinect data).

So I thought it would be interesting to get:

(the variables in italic being the ones taken once per person, and the ones in bold requiring a capture)

So OK for your two answers about the 3D point indices and the shapeunits.

For the rest, I still have a few questions (we should write an article on a blog somewhere ;-) )

Thanks again for all that information and the perfect answers! Besides getting help with your external, I feel a bit less lonely in trying to do interesting things with the Kinect v2 :-)

diablodale commented 8 years ago

Microsoft did not use anything Candide-based with the Kinect v2. That disappointed many. In addition, Microsoft didn't document the vertices, shape units, or animation units well. The best I've found so far is in Kinect.Face.h and the names of some enums. Finally, Microsoft cautioned that they may change the shape units, face model, and animation units at any time in any update. To me, it seemed like they were almost warning people away from using faceHD. So...beware. :-/

I write about the coordinate space in my wiki: @face3dmodel = 1 or 2 will enable output of a 3D model of the face in local face coordinate space. Therefore, the @distmeter does not apply to this set of data. This model and its coordinate space can be scaled, translated, and rotated using the 3D pose values described above. The 3D face models require a fully captured face; see modelstatus message.

I do not know about AU precision as it relates to a built model. In my testing, you will get AUs before the model is built, sometime after it has identified the head's translation.

The Kinect v2 SDK does not allow switching models. It only has its undocumented model and the internal code that likely uses learning/heuristics to build it. That can't be changed by us. With trial and error, you might be able to watch the v2 SDK AUs animate and find some math that maps them onto Candide.

I vaguely remember a forum discussion about loading/saving face models. I haven't needed the feature. There is CreateFaceModel(), where you can pass in the SUs from a previously built model. Then you will have a model, but you can't animate it. The AUs are read-only; you can't feed them into the model to get 2D/3D points. You would only have a static face model that you can rotate, scale, and translate.

I don't know your situation/case, so I have no recommendation other than experimenting with the AUs Microsoft exposes and the named HD 2D points. You could draw the model in 3D, pick out vertices of interest by hand, and track their value changes as you express changes in emotion.

Oh, and there are the face properties. You get them very quickly, and they don't need a built face model. dp.kinect2 exposes them, and they are on my wiki. That data might also help your research.