google-ar / sceneform-android-sdk

Sceneform SDK for Android
https://developers.google.com/sceneform/develop/
Apache License 2.0

ARCore 1.7 face detection API for all classifications and landmarks. #563

Open roshanpatel6040 opened 5 years ago

roshanpatel6040 commented 5 years ago

Hi, according to the new version, ARCore provides a face detection API to render 3D models on faces using a 468-point 3D mesh. But I want to detect the eye and iris positions, and currently only 3 region types are supported. Also, I am unable to get the 468 points from the native methods, and I don't understand how the 468 points are categorized.

Using dlib, I am not able to make the detection and drawing accurate; it fluctuates too much.

Any help will be appreciated.

rantaoca commented 5 years ago

Hi,

I believe this API doesn't give you any vertices for the eye/iris position.

You can get the points from the native methods using ArAugmentedFace_getMeshVertices(). You will need to pass in a valid ArAugmentedFace pointer whose tracking state is AR_TRACKING_STATE_TRACKING. It will return a pointer to 468 * 3 floats.
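For reference, the Java-side equivalent is AugmentedFace.getMeshVertices(), which exposes the same 468 * 3 floats as a FloatBuffer. A minimal sketch with the tracking-state check, assuming `face` comes from the current session (the method name is hypothetical):

```java
import com.google.ar.core.AugmentedFace;
import com.google.ar.core.TrackingState;
import java.nio.FloatBuffer;

// Read the face mesh only while the face is actively tracked.
void readMeshVertices(AugmentedFace face) {
  if (face.getTrackingState() != TrackingState.TRACKING) {
    return; // vertices are only meaningful in the TRACKING state
  }
  FloatBuffer vertices = face.getMeshVertices(); // 468 * 3 floats
  for (int i = 0; i < vertices.limit() / 3; i++) {
    float x = vertices.get(i * 3);
    float y = vertices.get(i * 3 + 1);
    float z = vertices.get(i * 3 + 2);
    // (x, y, z) is vertex i in the face's local coordinate space, in meters
  }
}
```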

If by "how 468 points are categorized" you mean where each point is on the face, I've personally found this tool pretty useful in visualizing vertices in Blender. You can visualize these points by opening up the canonical_mesh file provided in the sceneform examples, and enabling the indices visualizer. It looks something like this: screen shot 2019-02-26 at 8 01 58 pm

Hope that helps!

roshanpatel6040 commented 5 years ago

Hi @rantaoca, my aim is to find the iris and render accordingly, but as far as I can see these 468 points don't include the iris region. If I create a 3D FBX model matched to the mesh and place an eye at the center of the eye points, it will not be accurate. I have also used OpenCV to detect the iris, but it fluctuates as well. I created my own dataset to find iris points, but it is not accurate enough.

Thanks for the help; please guide me through the next steps.

roshanpatel6040 commented 5 years ago

Hi @rantaoca, let's consider point 67, which is on the top left of the forehead.

    FloatBuffer buffer = augmentedFace.getMeshVertices();
    Node node1 = new Node(); // normal Node
    node1.setLocalPosition(new Vector3(buffer.get(201), buffer.get(202), buffer.get(203)));
    node1.setParent(node); // AugmentedFace node

This instead places the model at position 297; similarly, other points end up at shifted positions. What could be the reason? Is it due to the front camera's inverted view?

Abhishek284 commented 5 years ago

I have the exact same issue. I am trying to find the distance between the two pupils, but I am not able to get the Vector3 points on the eyes.

rantaoca commented 5 years ago

@roshanpatel6040, the image you see above is the unmirrored model, as if you were looking at another person in front of you. When you render it through the ARCore projection matrix, it will be mirrored, as if your phone were a mirror and you were looking at yourself. Sceneform does this for you automatically, and you can find the relevant ARCore documentation under getViewMatrix().

@Abhishek284, from looking at the 3D model, the indices you might be interested in are:

Outside corner of right eye: 33
Inside corner of right eye: 133
Outside corner of left eye: 263
Inside corner of left eye: 362

You can calculate the IPD from these vertices. Note that "right" and "left" are defined from the perspective of the person to whom the mesh belongs, which means that when you render it on yourself it will be flipped, as if looking into a mirror.
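A minimal sketch of that calculation in Sceneform, using the inner-corner indices above (the helper name is hypothetical; since the mesh has no pupil vertices, this measures the inner-corner distance rather than a true pupil-to-pupil IPD):

```java
import com.google.ar.core.AugmentedFace;
import com.google.ar.sceneform.math.Vector3;
import java.nio.FloatBuffer;

// Approximate eye separation from the inner eye corners (vertices 133 and 362).
float innerEyeCornerDistanceMeters(AugmentedFace face) {
  FloatBuffer v = face.getMeshVertices();
  Vector3 rightInner = new Vector3(v.get(133 * 3), v.get(133 * 3 + 1), v.get(133 * 3 + 2));
  Vector3 leftInner = new Vector3(v.get(362 * 3), v.get(362 * 3 + 1), v.get(362 * 3 + 2));
  return Vector3.subtract(leftInner, rightInner).length();
}
```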

roshanpatel6040 commented 5 years ago

Hi @rantaoca, thanks for this mirror explanation. As you mentioned, those points are the eye corners. Do you have any approach for tracking the iris from these mesh points, or any other approach?

Abhishek284 commented 5 years ago

@rantaoca Thanks for the feedback. How do I create/find the Vector3 point at a given index/vertex? E.g., suppose I want the Vector3 point at the outside corner of the right eye, with index 33. I am assuming the indices refer into augmentedFace.getMeshVertices().

@roshanpatel6040 I see that you created a Vector3 point above: new Vector3(buffer.get(201), buffer.get(202), buffer.get(203)). Could you please help me understand the use of three different indices to generate one Vector3 point?

roshanpatel6040 commented 5 years ago

@Abhishek284 There are 468 points in the mesh, which means the buffer holds 468 * 3 = 1404 floats. Imagine you want the position of point 0: the Vector3 is x = buffer.get(0), y = buffer.get(1), z = buffer.get(2). The first three floats represent a single position. Now consider point 67: multiply by 3, 67 * 3 = 201, so the vector for position 67 is new Vector3(buffer.get(201), buffer.get(202), buffer.get(203)). Hope it helps you.
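Generalizing that arithmetic, a small hypothetical helper (vertex i occupies floats 3i, 3i+1 and 3i+2 of the mesh buffer):

```java
import com.google.ar.sceneform.math.Vector3;
import java.nio.FloatBuffer;

// Extract mesh vertex `index` as a Sceneform Vector3.
static Vector3 getVertex(FloatBuffer meshVertices, int index) {
  int base = index * 3;
  return new Vector3(
      meshVertices.get(base),      // x
      meshVertices.get(base + 1),  // y
      meshVertices.get(base + 2)); // z
}
```

With this, getVertex(augmentedFace.getMeshVertices(), 67) returns the same point as new Vector3(buffer.get(201), buffer.get(202), buffer.get(203)) above.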

Abhishek284 commented 5 years ago

Thank you @roshanpatel6040 for helping out with the vector calculation.

liuhangb commented 5 years ago

@rantaoca Can Blender export the indices of the triangulated mesh points in the order in which they are drawn?

ManuelTS commented 5 years ago

No documentation at all from Google about the mesh index positions on the face, on the very method retrieving the mesh vertices, which is just horrible. The image from @rantaoca, or a similar one, ought to be linked in the documentation at https://developers.google.com/ar/reference/java/arcore/reference/com/google/ar/core/AugmentedFace#getMeshVertices() to help every single programmer use AugmentedFaces more efficiently. This is common sense in the simplest way possible, but the decadence of today's society seems to affect the ARCore team too... I wasted nearly 2 months, at 8 h a day, because of the lack of the above picture.

Adrian-Jablonski commented 5 years ago

Does anyone have a detailed example they could share of how to put an image or 3D model on any of these points? I found a very descriptive example for ARKit that uses indices to put emojis on the vertices of a user's face (https://www.raywenderlich.com/5491-ar-face-tracking-tutorial-for-ios-getting-started), but I can't find anything similar for ARCore. The only ARCore example I found is the AugmentedFaces sample provided by Google, but it seems to just place an object on the center of the user's face, not on any specific one of the 468 points.

roshanpatel6040 commented 5 years ago

Hi @Adrian-Jablonski, you can get the vector position as I showed above. Simply create a node and render whatever renderable you want to place at that local position. As of now only three regions are provided, as you can see in the documentation; beyond those three regions you need to create your own 3D model matched to the Sceneform mesh. Read the documentation on how to create a 3D model that fits the mesh.
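A minimal sketch of that, assuming `faceNode` is an AugmentedFaceNode already attached to the scene and `renderable` is a loaded ModelRenderable (the helper name is hypothetical):

```java
import com.google.ar.core.AugmentedFace;
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.math.Vector3;
import com.google.ar.sceneform.rendering.ModelRenderable;
import com.google.ar.sceneform.ux.AugmentedFaceNode;
import java.nio.FloatBuffer;

// Attach `renderable` at mesh vertex `index`, parented to the face node so it
// follows the face as it moves.
void placeAtVertex(AugmentedFace face, AugmentedFaceNode faceNode,
                   ModelRenderable renderable, int index) {
  FloatBuffer v = face.getMeshVertices();
  Node node = new Node();
  node.setParent(faceNode);
  node.setLocalPosition(new Vector3(
      v.get(index * 3), v.get(index * 3 + 1), v.get(index * 3 + 2)));
  node.setRenderable(renderable);
}
```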

Adrian-Jablonski commented 5 years ago

@roshanpatel6040 Thank you for the quick response. I am still a little lost, because I am currently trying to build my first augmented reality app and am using the Augmented Faces sample from Google as a guide (https://github.com/google-ar/sceneform-android-sdk/tree/master/samples/augmentedfaces), since it is the closest sample to what I need to build. What you are saying is exactly what I need, but I am unsure where to place the code. I am assuming it would replace this piece of code that renders the fox face:

    // Load the face mesh texture.
    Texture.builder()
        .setSource(this, R.drawable.fox_face_mesh_texture)
        .build()
        .thenAccept(texture -> faceMeshTexture = texture);

But I think part of the code is missing, because augmentedFace in augmentedFace.getMeshVertices(); lacks a variable declaration.

If possible, could you point me further in the right direction on how to get any vector position in Google's AugmentedFaces example? Also, if I have to create my own 3D face model, will it still have the same 468 points in the same locations?
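Regarding the missing declaration, a minimal sketch of where `augmentedFace` would typically come from, assuming `arSceneView` is the sample's ArSceneView; faces are fetched from the session every frame inside the scene's update listener:

```java
import com.google.ar.core.AugmentedFace;
import com.google.ar.core.TrackingState;

// The update listener runs once per frame; this is where `augmentedFace`
// gets declared and its mesh read.
arSceneView.getScene().addOnUpdateListener(frameTime -> {
  for (AugmentedFace augmentedFace :
      arSceneView.getSession().getAllTrackables(AugmentedFace.class)) {
    if (augmentedFace.getTrackingState() == TrackingState.TRACKING) {
      // augmentedFace.getMeshVertices() is valid to read here
    }
  }
});
```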

AlexandrLenivenko commented 5 years ago

Hi everyone, I have the same problem and don't know how to find the lips.

ManuelTS commented 5 years ago

The picture from @rantaoca was very helpful, but it lacks detail on the mesh indices around the eyes and the mouth. Hence, I made a dedicated GitHub repo to show exactly that: https://github.com/ManuelTS/augmentedFaceMeshIndices

One example image from the repo is Left_Eye_shading, showing the mesh indices around the left eye.

rvgtesting commented 5 years ago

Hi, can anyone guide me to finding the ear lobe position? I've tried points 177 and 401. Using these points I am able to render the 3D model near the ear, but not exactly in the right position.

Is there any alternative way to find the ear lobe position?