Closed — HyundongHwang closed this issue 5 months ago.
Hey @HyundongHwang, for running a custom model like MoveNet Thunder, you'll have to use our new AI platform, Function. If you're interested, send me an email at hi@fxn.ai and we can get you set up.
VideoKit now uses Function to power all of its AI functionality, including human texture, speech-to-text, text-to-speech, and more.
@olokobayusuf I saw your answer today. Thank you for your kind reply. However, I need to run inference in real time on the device, and fxn.ai looks like server-side inference, so I don't think it will solve my problem. Is that right?
@HyundongHwang Function does both on-device (realtime) and server-side (cloud-based) inference. For example, VideoKit now uses Function for both speech-to-text (server-side) and human texture (realtime, 30 FPS).
Hello. I'm building a human pose analysis app. VideoKit's camera and TFLite model inference features have been working well for me, and I'm grateful to the developers.
I started with MoveNet Lightning, but MoveNet Thunder's inference quality was better (at a slightly lower speed), so I needed the Thunder version. As a VideoKit Core user, I uploaded the [MoveNet Thunder model binary](https://www.kaggle.com/models/google/movenet/tfLite/singlepose-thunder-tflite-int8) through the Hub and tried to convert and register it as a predictor, but it failed as shown below.
Are there any other preparations or considerations when bringing in an external TFLite model, converting it, and using it the way I did?
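One cheap sanity check before uploading (my own suggestion, not part of VideoKit or the Hub): verify that the downloaded Kaggle binary is actually a TFLite flatbuffer and not, say, a truncated file or an HTML error page. FlatBuffer files carry a 4-byte file identifier at byte offset 4, and for the TensorFlow Lite schema it is `TFL3`. The helper name and demo path below are hypothetical:

```python
import os
import tempfile

def looks_like_tflite(path: str) -> bool:
    """Heuristic check that `path` is a TFLite flatbuffer.

    FlatBuffer files place a 4-byte file identifier at byte
    offset 4; for the TensorFlow Lite schema it is b"TFL3".
    """
    with open(path, "rb") as f:
        header = f.read(8)
    return len(header) == 8 and header[4:8] == b"TFL3"

# Demo with a fabricated header; a real check would point at the
# downloaded singlepose-thunder .tflite file instead.
demo = os.path.join(tempfile.mkdtemp(), "demo.tflite")
with open(demo, "wb") as f:
    f.write(b"\x1c\x00\x00\x00TFL3" + b"\x00" * 16)
print(looks_like_tflite(demo))  # True
```

If this check fails, the conversion failure is a download problem rather than a VideoKit problem; if it passes, the issue is likely with the converter or the model's ops.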
I also considered another approach: instead of using VideoKit's AI feature, I tested what it would be like to use a different solution, tf-lite-unity-sample. However, this caused the following crash in the Android build:
Perhaps libtensorflowlite_xxx_jni.so is included twice, causing the problem. It built and ran cleanly on iOS, macOS, and Windows, though.
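If the crash really is caused by two copies of the TensorFlow Lite JNI library being packaged (one bundled by VideoKit, one by tf-lite-unity-sample), a common Android workaround is to tell Gradle to keep only one copy. This is an untested sketch for Unity's `Assets/Plugins/Android/mainTemplate.gradle`; the exact `.so` names are assumptions and should match whichever libraries actually collide in your build log:

```groovy
android {
    packagingOptions {
        // Keep the first copy of any duplicated TFLite JNI library
        // instead of packaging both and crashing at load time.
        pickFirst 'lib/**/libtensorflowlite_jni.so'
        pickFirst 'lib/**/libtensorflowlite_gpu_jni.so'
    }
}
```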
So I'd appreciate it if the developers could tell me how to register MoveNet Thunder, or how to build VideoKit without the AI feature.