andynewman10 opened this issue 1 year ago
Well, pytorch_lite as far as I know doesn't use the GPU.
And PyTorch Mobile didn't release a 2.1 version as far as I know; if they did, I don't mind updating to it.
Btw, tflite should be faster if the GPU is enabled.
Yes, 2.1 was released; thanks for letting me know:
https://central.sonatype.com/artifact/org.pytorch/pytorch_android_lite
Will try to update to it
You're welcome.
Yes, it would be great if you updated.
TF Lite GPU support is way too limited. The GpuDelegate is severely limited and does not work with most models. The NNAPI delegate, which apparently also supports GPUs (in addition to TPUs and DSPs, if I understood correctly), is supposed to work with more models, but I can't get it working. It's a real PITA.
But I am not talking about GPU support here: XNNPack is a pure-CPU library, written by Google, that accelerates operations on the CPU.
Do you know if it is supported and enabled in PyTorch? From what I can see, it is supported, but is it really enabled by default? See:
https://pytorch.org/mobile/home/
It's important to know because, on some devices, it can deliver a significant performance boost, not to be taken lightly.
Yes, I saw that XNNPack does exist in PyTorch.
But the model should be changed to allow it.
And if the model is exported with it, it should work automatically.
But the model should be changed to allow it
Can you tell me more about this?
https://mvnrepository.com/artifact/org.pytorch
Maven and Gradle artifacts updated 9 days ago.
But the model should be changed to allow it
Can you tell me more about this?
optimize_for_mobile
This does it for you, from what I found.
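For reference, here is a minimal export sketch; the model and file name are placeholders, and this assumes torchvision is installed (a sketch, not necessarily pytorch_lite's exact recipe):

import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

# Stand-in model; any scriptable nn.Module works the same way.
model = torchvision.models.mobilenet_v2().eval()
scripted = torch.jit.script(model)

# optimize_for_mobile rewrites the graph for the mobile runtime, which
# includes inserting XNNPACK-backed prepacked ops for conv2d/linear.
optimized = optimize_for_mobile(scripted)

# Save in the lite-interpreter format that the mobile runtime loads.
optimized._save_for_lite_interpreter("model.ptl")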
For iOS the latest one is 1.13.0.1: https://libraries.io/cocoapods/LibTorch-Lite/1.13.0.1
Just to keep track of the versions.
Updated to the latest PyTorch packages in 4.2.2.
Integration tests are working, so everything should behave the same; that's why I only increased the patch version.
Somebody is mentioning the availability of LibTorch-Lite 2.1.0 here:
https://github.com/pytorch/pytorch/issues/102833#issuecomment-1820125165
I am wondering: why isn't this version advertised at https://libraries.io/search?q=LibTorch-Lite ? (I am not extremely familiar with iOS developer sites, etc.) Is it worth updating?
Hello, for some reason I failed to make LibTorch-Lite work, but LibTorch is working, and I can't find a LibTorch v2 pod.
So it's not updated, and from my testing, updating the Android one didn't affect performance for better or worse. So as long as it's not needed, I don't think upgrading is important.
If anyone needs it to be updated, let me know.
Ah, the struggles of LibTorch-Lite versioning. @andynewman10 I was wondering as well why LibTorch-Lite 2.1.0 was not advertised. I asked this on discuss.pytorch.org as well. The 2.1.0 binaries clearly exist: CocoaPods/LibTorch-Lite/2.1.0/ (On Android I was able to run LibTorch-Lite 2.1.0 without problems.)
But on iOS, I get this strange crash on startup. (https://github.com/pytorch/pytorch/issues/102833) I spent hours trying to find out why this happened, without success.
But the model should be changed to allow it
What he means is: if you train/export a model with (Python) PyTorch 1.13, for example, and then try to run inference on mobile with, let's say, PyTorch 1.10, you'll probably get an error. (You have to use 1.13 on mobile as well.)
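If it helps, recent PyTorch releases ship a private helper for reading the bytecode version a lite model was exported with, which is handy for spotting this mismatch (it is a private API, so treat it as a diagnostic only; the file name is a placeholder):

from torch.jit.mobile import _get_model_bytecode_version

# A lite-interpreter model carries a bytecode version; a mobile runtime
# that is older than the exporting PyTorch cannot load newer bytecode.
print(_get_model_bytecode_version("model.ptl"))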
But on iOS, I get this strange crash on startup. (pytorch/pytorch#102833) I spent hours trying to find out why this happened, without success.
@cyrillkuettel You only get a crash on iOS 12, right? Can you confirm things run fine on iOS 13 or later?
Can't know for sure. The iPhone I use for testing (iPhone 6) only supports up until iOS 12.
Have you tried your code with the iOS simulator? I would try it with iOS 12, and with a later iOS version.
I tried running on the simulator. It still returns an error, but a different one.
flutter create --template=plugin_ffi --platforms=android,ios libtorch_test_version
added dependencies (in ios/libtorch_test_version.podspec):
cd example/ios
pod install
Open the simulator:
open -a simulator
Run on simulator:
flutter run -d 8648E37B-49FB-476F-9B52-041F5F4D4FD
Eventually this returns the following:
Invalid argument(s): Failed to load dynamic library 'libtorch_test_version.framework/libtorch_test_version': dlopen(libtorch_test_version.framework/libtorch_test_version, 0x0001): tried: '/Library/Developer/CoreSimulator/Volumes/iOS_21A5303d/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 17.0.simruntime/Contents/Resources/RuntimeRootlibtorch_test_version.framework/libtorch_test_version' (no such file), '/Library/Developer/CoreSimulator/Volumes/iOS_21A5303d/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 17.0.simruntime/Contents/Resources/RuntimeRoot/usr/lib/swift/libtorch_test_version.framework/libtorch_test_version' (no such file),
Honestly this is beyond frustrating. At this point I stopped, I don't care anymore...
@cyrillkuettel You must make sure that, in Xcode, Strip symbols is not set to All symbols, just Non-global symbols. This is critical and might explain dlopen failures.
Mind you, in debug builds this should not be an issue.
Have you tried the code on iOS 13+? Say, iOS 15?
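(If you manage build options through .xcconfig files instead of the Xcode UI, the equivalent setting should be STRIP_STYLE = non-global, if I'm not mistaken.)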
Yes, I have set it to Non-global symbols, forgot to mention this. Still the same "Failed to lookup symbol" error.
Screenshot from Simulator:
To be clear: 1.13.0.1, to be precise, works even with iOS 12.
I understand your frustration. Unfortunately, I'm not very good at iOS programming, so I cannot help you with this 'podspec debugging' issue. It looks like a file is not found, but the cause is unknown.
Do you know what the benefits of using 2.1.0 are over 1.13.x? Just curious.
I would not bother with 2.1.0 unless you need it for a very specific purpose.
Potential benefits: support for operators added in PyTorch 2.x (e.g. aten::scaled_dot_product_attention, which comes up just below) and general performance improvements.
How can I use LibTorch version 2? I have a ViT model that uses attention; it seems like LibTorch 1.13 doesn't have the operator, and I get these errors:
libc++abi: terminating due to uncaught exception of type torch::jit::ErrorReport:
Unknown builtin op: aten::scaled_dot_product_attention.
Here are some suggestions:
aten::_scaled_dot_product_attention
The original call is:
File "code/__torch__/timm/models/vision_transformer/___torch_mangle_1064.py", line 32
_6 = (q_norm).forward()
_7 = (k_norm).forward()
x = torch.scaled_dot_product_attention(q, k, v)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
input = torch.reshape(torch.transpose(x, 1, 2), [_0, _2, _4])
_8 = (proj_drop).forward((proj).forward(input, ), )
Message from debugger: killed
An update to LibTorch is needed, so I will look into it when I have time.
edit: @luvwinnie the latest version of PyTorch is already used: https://github.com/abdelaziz-mahdy/pytorch_lite/blob/3ab7bc081c6dc19b0947f34f815974b9682e2d85/android/build.gradle#L64
Same for iOS: https://github.com/CocoaPods/Specs/tree/master/Specs/1/3/c/LibTorch
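If you want to double-check which operators an exported model actually requires, something like this on the desktop side should work (the file name is a placeholder):

import torch

# List the operators referenced by the exported TorchScript model.
# aten::scaled_dot_product_attention only exists in PyTorch >= 2.0, so a
# 1.13 mobile runtime cannot resolve it if it shows up in this list.
scripted = torch.jit.load("vit_model.pt")
for op in torch.jit.export_opnames(scripted):
    print(op)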
@abdelaziz-mahdy It seems like my environment is using LibTorch (1.13.0.1) on iOS?
PODS:
  - camera_avfoundation (0.0.1):
    - Flutter
  - Flutter (1.0.0)
  - LibTorch (1.13.0.1):
    - LibTorch/Core (= 1.13.0.1)
  - LibTorch/Core (1.13.0.1):
    - LibTorch/Torch
  - LibTorch/Torch (1.13.0.1)
  - onnxruntime (0.0.1):
    - Flutter
    - onnxruntime-objc (= 1.15.1)
  - onnxruntime-c (1.15.1)
  - onnxruntime-objc (1.15.1):
    - onnxruntime-objc/Core (= 1.15.1)
  - onnxruntime-objc/Core (1.15.1):
    - onnxruntime-c (= 1.15.1)
  - path_provider_foundation (0.0.1):
    - Flutter
    - FlutterMacOS
  - pytorch_lite (0.0.1):
    - Flutter
    - LibTorch (~> 1.13.0.1)

DEPENDENCIES:
  - camera_avfoundation (from `.symlinks/plugins/camera_avfoundation/ios`)
  - Flutter (from `Flutter`)
  - onnxruntime (from `.symlinks/plugins/onnxruntime/ios`)
  - path_provider_foundation (from `.symlinks/plugins/path_provider_foundation/darwin`)
  - pytorch_lite (from `.symlinks/plugins/pytorch_lite/ios`)
dependencies:
  flutter:
    sdk: flutter
  # image_picker: ^0.8.4+4
  # The following adds the Cupertino Icons font to your application.
  # Use with the CupertinoIcons class for iOS style icons.
  image: ^4.2.0
  pytorch_lite: ^4.2.5
  camera: 0.10.6
  syncfusion_flutter_gauges: ^26.1.42
  loading_animation_widget: ^1.2.1
Is my pytorch_lite not the latest, or is something else the problem?
Does it work on Android? If yes, it may be a PyTorch iOS problem.
I am currently carrying out performance tests with TF Lite and pytorch_lite in Flutter (I should be able to share more details in the future, if anyone is interested).
My question is: does pytorch_lite use XNNPack by default? pytorch_lite seems to be faster than tflite_flutter, yet, surprisingly, I don't manage to enable XNNPack with tflite_flutter (I get an error).
If the answer is yes, is it possible to, e.g., explicitly enable or disable XNNPack?
PS: PyTorch 2.1.0 was released last week. Do you plan to update pytorch_lite to the new version?
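For what it's worth, on the PyTorch side XNNPack usage is mostly decided at export time rather than at runtime. As far as I can tell, the closest thing to a switch is blocklisting the prepack pass during export; a sketch under that assumption (model and file name are placeholders):

import torch
import torchvision
from torch.utils.mobile_optimizer import MobileOptimizerType, optimize_for_mobile

model = torchvision.models.mobilenet_v2().eval()  # stand-in model
scripted = torch.jit.script(model)

# By default optimize_for_mobile inserts XNNPACK-backed prepacked ops for
# conv2d/linear. Blocklisting that pass keeps the plain ATen ops instead
# of the XNNPACK prepacked ones.
no_xnnpack = optimize_for_mobile(
    scripted,
    optimization_blocklist={MobileOptimizerType.INSERT_FOLD_PREPACK_OPS},
)
no_xnnpack._save_for_lite_interpreter("model_no_xnnpack.ptl")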