Hello, I have replaced the underlying face keypoint detection module, and now there is a model file. How should I compile it, and how should the code read the model file? Finally, I want to package the model file together with the source code, which would be convenient for use on Android and iOS. I hope to get your advice. Thanks.
FYI, we are in the process of updating the detection + landmarks via TVM, so that should support more SOTA CNN modules for devices with Metal or Vulkan GPGPU capabilities.
I'm not sure I fully understand exactly what you want to do. Maybe you can provide a little more detail. What functionality are you looking for exactly?
Do you just want to use the mobile facefilter app and OpenGL components but with a different face tracker?
Do you want to run a different detection + landmark module and still use the eye model?
In general, you can plug in a different face detector and landmark regressor by creating your own version of the FaceDetectorFactory class, which can abstract allocation of these pieces.
I guess detector + landmarks can be the same in some cases.
Here is the constructor for the filename-oriented base class. If you want to do something quickly, you can inject your own models here, as long as they adhere to the base class types returned by the factory.
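For illustration only (the base types, method names, and constructor below are assumptions, not the actual internal API), a custom factory along those lines might look roughly like this:

```cpp
// Hypothetical sketch: the real FaceDetectorFactory interface lives in the
// internal drishti libs, so the base types and method names here are stand-ins.
#include <memory>
#include <string>

struct FaceDetector;      // placeholder for the internal detection interface
struct LandmarkRegressor; // placeholder for the internal landmark regression interface

class MyFaceDetectorFactory /* : public FaceDetectorFactory (assumed base) */
{
public:
    explicit MyFaceDetectorFactory(std::string modelDir)
        : m_modelDir(std::move(modelDir))
    {
    }

    // Allocate your replacement detector from your own model file:
    std::unique_ptr<FaceDetector> getFaceDetector();

    // Allocate your replacement landmark regressor (possibly the same module):
    std::unique_ptr<LandmarkRegressor> getLandmarkRegressor();

private:
    std::string m_modelDir; // or individual model filenames, matching the filename-oriented base class
};
```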
This stuff is currently hidden (by design) from the public API, so you'll have to modify the internal libs to achieve this. Since the core detection and regression modules change so quickly, the public API should probably be updated to accommodate this sort of modularity. I'll think about this as part of the next overhaul, since I think your use case is fairly common.
Finally, I want to package the model file together with the source code. This is convenient for use on Android and iOS. I hope to get your advice. Thanks.
Again, I'm not 100% clear what you want, but essentially you can probably do what you need with one of the following.
add_subdirectory()
GIT_SUBMODULE
If you want to produce a single shared lib or dynamic framework to drop into another project, you can use the DRISHTI_BUILD_SHARED_SDK flag. For Android that will create a single *.so file, and for iOS it will create a dynamic framework. That sounds like it might be what you are looking for.
Thank you for your detailed reply. My goal is to replace the underlying face keypoint detection but keep the eye model regressor, because the previous eye tracking often drifts, and I think the cause is that the face keypoints are unstable.
Now I have replaced the face detector and face landmark regressor modules, but the new keypoint detection module has a static binary model file. The demo on the PC side is no problem, because I can load the model file directly from an absolute path. But I don't know how to compile this file into the *.so file, or how the code should then load the binary file. This may be a problem with the sugar CMake configuration; I don't know how to configure it.
Indeed, I need to package it into a single shared lib. My current handling is a bit clumsy: I compiled the Android example and unzipped it to get the .so file. I know there is a parameter such as DRISHTI_BUILD_SHARED_SDK. Can you give an example command for compiling the Android lib? Thanks.
How do I compile a binary model file like this into the Android lib *.so? And what path should I use to load it?
I checked, and it seems CMake can't package binary model files into a static lib? Maybe I should still put the model file in a separate static directory and load it by reading a file stream.
How do I compile a binary model file like this into the Android lib *.so? And what path should I use to load it?
CMake doesn't have a built-in method for compiling binary files to source code, AFAIK. The facefilter app already has a mechanism for managing assets. You should just be able to use that.
For the facefilter demo app you can look at FaceTrackerFactoryJson. That uses generic streams and will work for iOS and desktop builds. For Android there is a FaceTrackerFactoryJsonAndroid variant that uses an AAssetManager wrapper to provide the required std::istream inputs. That class just reads a top-level JSON asset index and pulls out resources by key name.
The drift you noticed is probably caused by the temporal component. There is a simple Hungarian assignment step that is used internally and can be controlled using drishti::Context::set{MaxTrackMisses,MinTrackHits}. The detector recall can be adjusted using setAcfCalibration.
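As a rough illustration of where those knobs would be applied (only the setter names come from the comment above; the header path, constructor details, and value types are assumptions):

```cpp
// Sketch only: setter names are taken from the maintainer's comment;
// the header path and parameter types are assumptions, not the confirmed API.
#include <drishti/Context.hpp> // assumed public header

void tuneTracking(drishti::Context& context)
{
    context.setMinTrackHits(3);       // require a few consecutive hits before reporting a track
    context.setMaxTrackMisses(2);     // drop a track after this many consecutive misses
    context.setAcfCalibration(0.01f); // nudge the ACF detector threshold to trade recall vs. false positives
}
```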
I know there is a parameter such as DRISHTI_BUILD_SHARED_SDK. Can you give an example command for compiling the Android lib? Thanks.
It depends on how you are building. You can either:
1) Build a library directly from a top level CMake command and use that in some other project. It would look something like this:
CONFIG=Release; TOOLCHAIN=android-ndk-r18b-api-24-arm64-v8a-clang-libcxx11; polly.py --toolchain ${TOOLCHAIN} --config-all ${CONFIG} --install --verbose --reconfig --fwd DRISHTI_BUILD_EXAMPLES=ON DRISHTI_BUILD_SHARED_SDK=ON
2) Build the facefilter app managed from Android Studio (Gradle).
Both of those are covered in the project README.
We are working on improving the detection stability. The challenge is to get it working at frame rate without much processor load while still supporting full-volume detection for handheld phone use.