NumesSanguis / FACSvatar

An Open Source Modular Framework From Face to FACS Based Avatar Animation (Unity3D / Blender)
GNU Lesser General Public License v3.0

get face point #14

Open · zhaishengfu opened this issue 4 years ago

zhaishengfu commented 4 years ago

This is a great project. I wonder, can I only get AUs from OpenFace with ZeroMQ, or can I also get the 68 face points from it?

NumesSanguis commented 4 years ago

Thanks for that ^-^ Currently this project only deals with AU values, not the face points.
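For reference, receiving those AU values in Python could look roughly like the sketch below. The port, the topic/timestamp/JSON multipart layout, and the "au_r" payload key are assumptions based on FACSvatar's usual ZeroMQ setup, so check them against your own module configuration:

```python
import json
import zmq

# Minimal pyzmq subscriber sketch for FACSvatar-style AU messages.
# Assumed: multipart frames are [topic, timestamp, JSON payload] and
# the payload stores AU intensities under "au_r" -- verify both.
ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5571")       # adjust to your publisher
sub.setsockopt_string(zmq.SUBSCRIBE, "")  # receive all topics

while True:
    topic, timestamp, payload = sub.recv_multipart()
    data = json.loads(payload.decode("utf-8"))
    print(timestamp.decode("utf-8"), data.get("au_r"))
```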

If you want to have access to the face points, you have 2 options: modify the GUI C# code, or (for cross-platform support) attach a ZeroMQ component to the core C++ code:

  1. The ZeroMQ component added to OpenFace's GUI (limited to Windows) was created by my colleague, Huang. Currently it only extracts the AU values, because that was all this project needed. You will have to rebuild OpenFace's GUI; the instructions are here: https://github.com/NumesSanguis/FACSvatar/tree/master/openface To find what to modify, search for "Huang" in the file MainWindow.xaml.cs. The code for face points should be similar, but you will need to convert the face point values to JSON (see the sketch after this list).

  2. All calculations are in the core code of OpenFace. Attaching a ZeroMQ component here would be ideal, because this part also works on Linux (and Mac?). My colleague was more fluent in C++ and I'm more of a Python person, so we didn't go this way. So there is not much I can help you with here, but it would be great if you, or someone, could write a ZeroMQ network component here :)
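For either option, the JSON conversion mentioned above could be as simple as the following (sketched in Python purely to show the message shape; in practice this logic would live in the C# GUI or C++ core, and the "frame"/"landmarks_2d" field names and the port are invented, not an existing FACSvatar schema):

```python
import json
import time
import zmq

def landmarks_message(frame_no, points):
    """points: list of 68 (x, y) tuples from the tracker."""
    return json.dumps({
        "frame": frame_no,
        "landmarks_2d": [{"x": x, "y": y} for x, y in points],
    })

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5571")  # hypothetical port

fake_points = [(0.0, 0.0)] * 68  # stand-in for real tracker output
pub.send_multipart([
    b"openface.landmarks",              # topic
    str(time.time()).encode("utf-8"),   # timestamp
    landmarks_message(0, fake_points).encode("utf-8"),
])
```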

May I ask why you prefer the 68 face points over AUs?

zhaishengfu commented 4 years ago

Thank you for the help. I tried your methods and used my own cartoon model. Basically, the result is good, but I need more expression details, such as jawForward. I find that this method cannot drive some mouth animations. I want to use ARKit's 48 blendshapes, and many of them cannot be derived from AUs alone, so I want to get more information from the original face points.

NumesSanguis commented 4 years ago

The limitation is not FACS itself; OpenFace only supports a subset of 18 AUs. In total there are around 40, depending on whether you count head rotations and such. More AUs can be seen here: https://imotions.com/blog/facial-action-coding-system/. FACSvatar has only 17 of them implemented, because the only proper open source toolkit available at the time was OpenFace. The goal for FACSvatar, however, is to support all AUs.

FACSvatar relies on AUs because they are software independent. For example, when you improve the AU tracker, no other code (in theory) needs adjustment. Using tracking dots directly, however, would require mapping them to every character to get accurate visualization. As another example, you mention that OpenFace has 68 tracking points, while ARKit has 48. Switching to ARKit would then require changes across the whole framework if we relied on tracking points. That is why FACSvatar hasn't been using 2D/3D points directly.
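To make that decoupling concrete, here is a toy sketch of a per-character mapping layer. The blendshape names and scale factors are invented for illustration, not taken from FACSvatar or ARKit:

```python
# Tracker output stays in AU space; each character carries its own
# AU -> blendshape mapping, so swapping trackers never touches this.
AU_TO_BLENDSHAPES = {
    # one AU can drive several character-specific blendshapes
    "AU12": {"mouthSmile_L": 1.0, "mouthSmile_R": 1.0},
    "AU26": {"jawOpen": 0.8},
}

def apply_aus(au_values, mapping=AU_TO_BLENDSHAPES):
    """au_values: dict like {"AU12": 0.6} with intensities in [0, 1]."""
    weights = {}
    for au, intensity in au_values.items():
        for shape, scale in mapping.get(au, {}).items():
            weights[shape] = weights.get(shape, 0.0) + intensity * scale
    return weights

print(apply_aus({"AU12": 0.6, "AU26": 0.5}))
# {'mouthSmile_L': 0.6, 'mouthSmile_R': 0.6, 'jawOpen': 0.4}
```

With a layer like this, improving or replacing the AU tracker only changes where the AU values come from; the per-character mapping stays untouched.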

If you plan to convert those 48 dots to AU values, I would be glad to help :)

zhaishengfu commented 4 years ago

Thanks. Yes, you are right: a complete set of AUs can get good results, but the problem is, just as you said, that there is no complete AU detection system right now. Also, ARKit has 48 blendshapes, not 48 points, so it can get wonderful results (you can find an iPhone and Unreal example here: https://www.youtube.com/watch?v=MfnNJaVCLM8 , this is really a perfect result!). I believe that using complete AUs could get similar results, but not at the current time. I have thought of many other methods, such as curve matching and the MPEG-4 facial animation system, and all of them need the original points. You said you could help convert dots to AUs; do you know any methods for doing so? I think we would need to train something like an expression feature to convert the dots.
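For example, I imagine the training could start as simple as a linear regression from blendshape weights to AU values. Below is a sketch of the idea with random stand-in arrays, since I don't have paired recordings yet; it is not a validated method:

```python
import numpy as np

# Stand-in data: in practice you would record the same performances
# with both an ARKit blendshape rig (48 weights) and an AU tracker
# (here 17 AUs, matching OpenFace), then fit a mapping between them.
rng = np.random.default_rng(0)
X = rng.random((500, 48))       # 500 frames of blendshape weights
W_true = rng.random((48, 17))   # pretend ground-truth mapping
Y = X @ W_true                  # corresponding AU intensities

# Least-squares fit: find W such that X @ W approximates Y
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

new_frame = rng.random(48)      # a new frame of blendshape weights
au_estimate = new_frame @ W     # estimated AU values
print(au_estimate.shape)        # (17,)
```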

delebash commented 4 years ago

@zhaishengfu

Did you find a solution? Thanks