Hixie opened this issue 1 year ago (Open)
I think this is a great idea. It would then be possible to use a lot of the same code as on iOS. Unfortunately there are not a lot of resources for using the PyTorch C++ API with Android.
I've been trying this the past few days, compiling PyTorch for Android C++ and linking it in CMake, but with no luck. I couldn't get past undefined reference errors. If anyone has successfully managed to do it, please tag me.
@dvagala Did you try NativeApp from the official pytorch demo?
@cyrillkuettel Thank you! This was the right direction, I've tried it now, tweaked it a bit and it's working ❤️
My problem was that I was trying to follow this tutorial to build PyTorch from source to get the .a libraries and link them in CMakeLists.txt, but I was hitting a dead end.
For anyone with the same issue, here is how I managed to add native C++ LibTorch to FFI Android/Flutter:
The inference is significantly faster through FFI. Moreover, with bigger input tensors I was getting an OutOfMemoryError before, and now with FFI it's totally fine.
The current version of the package:
Input shape (1, 3, 350, 350) - inference 886ms
Input shape (1, 3, 700, 700) - inference 3050ms
Input shape (1, 3, 2000, 2000) - inference OutOfMemoryError
Input shape (1, 3, 8750, 8750) - inference OutOfMemoryError
With FFI:
Input shape (1, 3, 350, 350) - inference 250ms
Input shape (1, 3, 700, 700) - inference 314ms
Input shape (1, 3, 2000, 2000) - inference 580ms
Input shape (1, 3, 8750, 8750) - inference 12114ms
(I measured the time to run the model inference from a Dart List and get the output back into a Dart List, so the conversions to/from C++ data structures are already included in the measurements. The test ML model was LDC.)
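For reference, the shape of such an FFI boundary, where the conversion cost lives, looks roughly like this (a sketch with invented names, not this package's actual code; the real forward pass is stubbed out):

```cpp
#include <cstdint>

// Hypothetical entry point that Dart calls via dart:ffi. Dart passes its
// List<double> as a C float array (Pointer<Float>), and the result is
// written back into a caller-provided array, so both conversions happen
// at this boundary and are included in the timings above.
extern "C" void run_model(const float* input, int32_t input_len,
                          float* output, int32_t output_len) {
  // Stand-in for the real LibTorch forward pass; here we just copy
  // the input through as a placeholder.
  int32_t n = input_len < output_len ? input_len : output_len;
  for (int32_t i = 0; i < n; ++i) {
    output[i] = input[i];
  }
}
```

On the Dart side the input list would be copied into natively allocated memory before the call and the output read back out afterwards, which is why those conversions are part of the measured times.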
Please note that I don't currently have time to rewrite this package and make a PR.
@dvagala Big if true! Wow! Those are some drastic performance improvements. That's amazing; I have to admit I did not expect that much of a gain. It seems the platform/method channel was actually really slow. How did you solve it? Did you do the pre-processing in C++ as well? Can you share the code?
As a side note, your C++ might be suitable for iOS as well (since iOS is largely based on Objective-C). We might even be able to get rid of platform channels entirely.
@cyrillkuettel Sure! I've made an example project here
Yes, it's running on both Android and iOS, and all the code is shared. That's the beauty of Flutter :)
@dvagala I was trying to follow your steps but using the `flutter create --template=plugin_ffi` command. I couldn't make it work on iOS. Do you have any experience with it?
Update: It's OK now, I forgot to copy the .cpp file to the Runner target.
Another question: is it possible to make an FFI plugin with `s.static_framework = true` in the podspec? The workaround makes it difficult to distribute the plugin. If not, is there a way to automate the file copy after the Pods install phase?
@beroso did you find a fix for this?
@beroso Hi, I wanted to ask if you eventually found a solution for the iOS C++ files?
I have the time and motivation to rewrite this package to dart:ffi. But I need to find a way to compile/include the C++ files. Right now `s.static_framework = true` works for development purposes, but I need to add each file manually to Xcode, and we don't want users of the plugin to have to do that...
Hello, I suffered from the same problem in my package, but the solution suggested to me was to create a static lib from PyTorch and the C++ FFI code, and use that in the iOS pod. Sadly my knowledge of iOS is very limited, so I failed to do so and reverted back to Objective-C and Java. I hope this information helps you.
Hello, thanks for the information. I believe I have found the solution to this problem. I haven't tested it yet, but it looks promising:
Step 2: Building and bundling native code
```yaml
plugin:
  platforms:
    some_platform:
      ffiPlugin: true # if we set this, Flutter will bundle the C++ files
```
This configuration invokes the native build for the various target platforms and bundles the binaries in Flutter applications using these FFI plugins.
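With `ffiPlugin: true`, Flutter drives the Android native build from the plugin's `android/CMakeLists.txt`. As a rough sketch of what that file could look like (all file and target names here are assumptions for illustration, not this package's actual layout):

```cmake
cmake_minimum_required(VERSION 3.10)
project(pytorch_lite_ffi)

# Hypothetical source file containing the extern "C" bridge functions
add_library(pytorch_lite_ffi SHARED src/bridge.cpp)

# Link against a prebuilt LibTorch for Android (paths are assumptions)
target_include_directories(pytorch_lite_ffi
  PRIVATE ${CMAKE_SOURCE_DIR}/libtorch/include)
target_link_libraries(pytorch_lite_ffi
  ${CMAKE_SOURCE_DIR}/libtorch/lib/libtorch.so)
```

Flutter would then build and bundle the resulting shared library into the app automatically, which is exactly the distribution problem discussed above.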
I believe I did try that, but anyway, if it works for you let me know, since FFI was more stable and more predictable and I'd like to go back to it.
Yes, it's stable, and you don't have duplicated inference code for each host platform. Also, there are some neat tricks I found in https://github.com/ValYouW/flutter-opencv-stream-processing, so with dart:ffi we can actually have a true shared-memory solution. One could imagine writing the input image to a buffer and reading it from the other side in C++.
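The shared-memory idea could be sketched like this on the native side (all names here are hypothetical, not taken from the repositories mentioned above): Dart asks C++ for a buffer, keeps it as a `Pointer<Uint8>`, writes each camera frame into it, and the native side reads and processes the same memory with no copy.

```cpp
#include <cstdint>
#include <cstdlib>

// Allocate a buffer that Dart will hold as Pointer<Uint8> for the
// lifetime of the stream and write camera frames into directly.
extern "C" uint8_t* create_frame_buffer(int32_t size) {
  return static_cast<uint8_t*>(std::malloc(static_cast<size_t>(size)));
}

extern "C" void free_frame_buffer(uint8_t* buf) {
  std::free(buf);
}

// Example in-place processing step over the shared memory: invert the
// pixel values that Dart wrote. A real pipeline would run pre-processing
// or inference here instead.
extern "C" void process_frame(uint8_t* buf, int32_t size) {
  for (int32_t i = 0; i < size; ++i) {
    buf[i] = static_cast<uint8_t>(255 - buf[i]);
  }
}
```

On the Dart side, `buf.asTypedList(size)` would give a `Uint8List` view over the very same memory, which is what makes this a zero-copy path.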
Yes, I did that too, using the repo as a reference 😅.
The latest commit using FFI I could find, in case that helps you: https://github.com/zezo357/pytorch_lite/commit/b047edaf98b311518c031097fce35bd70b50814f
Fascinating. I have done a lot of the same things in my own project (not open source yet, because I need to fix this distribution problem and clean things up).
Only just today I discovered your package 🙃
From what I know, the only feature I don't provide that exists in your package is inference as a direct list of values.
I have inference for YOLOv5 and YOLOv8, and I used Pigeon to ensure null-safe communication and async operations.
I'm not the owner of this project :)
Oh my bad 😂 I didn't notice. If you are able to do it, let me know, and if you want to make a PR I would love that.
@cyrillkuettel I made this PR on the latest commit for FFI, just to extract it from the history in case you need to check it: https://github.com/zezo357/pytorch_lite/pull/65. I also included the points that need to be fixed for the FFI to be used. https://github.com/zezo357/pytorch_lite/tree/latest-ffi is the branch with the last edits for FFI.
Currently, on Android, this package goes from Dart to Java via message channels, then from Java to C++ glue code via JNI, then from that C++ to the actual core PyTorch library (and all the way back). It would be more efficient if the Dart code used FFI to drive the C++ code directly.
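A minimal sketch of what the direct route could look like on the native side, with the LibTorch calls stubbed out so the example stands alone (all names here are assumptions, not this package's API):

```cpp
#include <cstdint>

// In the real plugin this file would #include <torch/script.h>, hold a
// torch::jit::script::Module loaded via torch::jit::load(path), and run
// module.forward(...) in the forward() function below. The stubs keep
// this sketch compilable without LibTorch installed.

// Returns 0 on success, -1 on failure.
extern "C" int32_t load_model(const char* path) {
  // real version: module = torch::jit::load(path);
  return path != nullptr ? 0 : -1;
}

extern "C" void forward(const float* input, int32_t len, float* output) {
  // real version: wrap input with torch::from_blob, call module.forward,
  // then copy the output tensor's data into `output`.
  for (int32_t i = 0; i < len; ++i) {
    output[i] = input[i];
  }
}
```

Dart would bind these symbols with `DynamicLibrary.open(...)` and `lookupFunction`, calling straight into C++ with no Java layer and no JNI glue in between, which is the efficiency gain this issue is proposing.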