abdelaziz-mahdy / pytorch_lite

Flutter package to help run PyTorch Lite models for classification, YOLOv5, and YOLOv8.
MIT License

XNNPack support and performance comparison with TF Lite for Flutter #53

Open andynewman10 opened 11 months ago

andynewman10 commented 11 months ago

I am currently carrying out performance tests with TF Lite and pytorch_lite in Flutter (I should be able to share more details in the future, if anyone is interested).

My question is: does pytorch_lite use XNNPack by default? pytorch_lite seems faster than tflite_flutter, yet surprisingly I can't manage to enable XNNPack with tflite_flutter (I get an error).

If the answer is yes, is it possible to, e.g., explicitly enable or disable XNNPack?

PS: PyTorch 2.1.0 was released last week. Do you plan to update pytorch_lite to use the new version?

abdelaziz-mahdy commented 11 months ago

Well, pytorch_lite, as far as I know, doesn't use the GPU.

And PyTorch Mobile hasn't released a 2.1 version as far as I know; if they have, I don't mind updating to it.

Btw, TFLite should be faster if the GPU is enabled.

abdelaziz-mahdy commented 11 months ago

Yes, 2.1 was released. Thanks for letting me know.

https://central.sonatype.com/artifact/org.pytorch/pytorch_android_lite

Will try to update to it

andynewman10 commented 11 months ago

You're welcome.

Yes, it would be great if you updated.

TF Lite GPU support is way too limited. The GpuDelegate is severely limited and does not work with most models. The NNAPI delegate, which somehow also supports GPUs (in addition to TPUs and DSPs, if I understood correctly), is supposed to work with more models, but I can't get it working. It's a real PITA.

But I am not talking about GPU support: XNNPack is pure CPU. It is written by Google and accelerates operations on the CPU.

Do you know if it is supported and enabled in PyTorch? From what I can see it is supported, but is it really enabled by default? See:

https://pytorch.org/mobile/home/

It's important to know because, on some devices, it can provide a significant performance boost, which is not to be taken lightly.

abdelaziz-mahdy commented 11 months ago

Yes, I saw that XNNPack exists in PyTorch.

But the model needs to be exported to take advantage of it.

And if the model is exported with it, it should work automatically.

andynewman10 commented 11 months ago

But the model should be changed to allow it

Can you tell me more about this?

andynewman10 commented 11 months ago

https://mvnrepository.com/artifact/org.pytorch

Maven and Gradle artifacts updated 9 days ago.

abdelaziz-mahdy commented 11 months ago

But the model should be changed to allow it

Can you tell me more about this?

optimize_for_mobile

This does it for you from what I found
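A minimal sketch of that export path, assuming you can re-export the model in Python (the tiny Sequential model here is a stand-in, not one of the package's supported models):

```python
import os
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Stand-in model; a real classification / YOLO export would go here.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
).eval()

scripted = torch.jit.script(model)
# optimize_for_mobile fuses conv+bn, rewrites supported ops to their
# XNNPACK-backed versions, and prepacks weights for the default CPU backend.
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("model.ptl")

# Sanity check: the optimized module computes the same result.
x = torch.randn(1, 3, 32, 32)
assert torch.allclose(model(x), optimized(x), atol=1e-4)
assert os.path.exists("model.ptl")
```

The resulting .ptl file is what the lite interpreter on device loads; the XNNPACK rewrites are baked in at export time, which matches the "exported with it, works automatically" behavior described above.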

abdelaziz-mahdy commented 11 months ago

For iOS, the latest one is 1.13.0.1: https://libraries.io/cocoapods/LibTorch-Lite/1.13.0.1

Just to keep track of the versions.

abdelaziz-mahdy commented 11 months ago

Updated to the latest PyTorch packages in 4.2.2.

Integration tests are passing, so everything should behave the same; that's why I only bumped the patch version.

andynewman10 commented 10 months ago

Somebody mentions the availability of LibTorch-Lite 2.1.0 here:

https://github.com/pytorch/pytorch/issues/102833#issuecomment-1820125165

I am wondering: why isn't this version advertised at https://libraries.io/search?q=LibTorch-Lite ? (I am not extremely familiar with iOS developer sites, etc.) Is it worth updating?

abdelaziz-mahdy commented 10 months ago

Hello, for some reason I failed to make LibTorch-Lite work, but LibTorch is working, and I can't find a LibTorch v2 pod.

So it's not updated. From my testing, updating the Android one didn't affect performance for better or worse, so as long as it's not needed, I don't think upgrading is important.

If anyone needs it to be updated let me know

cyrillkuettel commented 10 months ago

Ah, the struggles of LibTorch-Lite versioning. @andynewman10 I was wondering as well why LibTorch 2.1.0 was not advertised. I asked this on discuss.pytorch.org as well. The 2.1.0 binaries clearly exist: CocoaPods/LibTorch-Lite/2.1.0/ (On Android I was able to run LibTorch-Lite 2.1.0 without problems.)

But on iOS, I get this strange crash on startup. (https://github.com/pytorch/pytorch/issues/102833) I spent hours trying to find out why this happened, without success.

But the model should be changed to allow it

What he means is: if you train/export a model with (Python) PyTorch 1.13, for example, and then try to run inference on mobile with, let's say, PyTorch 1.10, you'll probably get an error. (You have to use 1.13 on mobile as well.)
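A minimal sketch of what "same version on both sides" looks like in practice (the file name and the Linear model are illustrative only; the point is that the exporting torch version must match the on-device LibTorch(-Lite) runtime):

```python
import torch
import torch.nn as nn
from torch.jit.mobile import _load_for_lite_interpreter

# Export with the SAME torch major/minor version as the LibTorch(-Lite)
# runtime that will load the file on device.
print(torch.__version__)

model = nn.Linear(4, 2).eval()
traced = torch.jit.trace(model, torch.randn(1, 4))
traced._save_for_lite_interpreter("model_lite.ptl")

# Loading with the lite interpreter mirrors what the mobile runtime does;
# a version mismatch typically surfaces here as an unknown-op error.
m = _load_for_lite_interpreter("model_lite.ptl")
out = m(torch.randn(1, 4))
assert out.shape == (1, 2)
```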

andynewman10 commented 10 months ago

But on iOS, I get this strange crash on startup. (pytorch/pytorch#102833) I spent hours trying to find out why this happened, without success.

@cyrillkuettel You only get a crash on iOS 12, right? Can you confirm things run fine on iOS 13 or later?

cyrillkuettel commented 10 months ago

Can't know for sure. The iPhone I use for testing (iPhone 6) only supports up until iOS 12.

andynewman10 commented 10 months ago

Have you tried your code with the iOS simulator? I would try it with iOS 12, and with a later iOS version.

cyrillkuettel commented 10 months ago

I tried running on the simulator. It still returns an error, but a different one.

flutter create --template=plugin_ffi --platforms=android,ios libtorch_test_version 

added dependencies (in ios/libtorch_test_version.podspec):

```ruby
Pod::Spec.new do |s|
  s.name             = 'libtorch_test_version'
  s.version          = '0.0.1'
  s.summary          = 'A new Flutter FFI plugin project.'
  s.description      = <<-DESC
A new Flutter FFI plugin project.
                       DESC
  s.homepage         = 'http://example.com'
  s.license          = { :file => '../LICENSE' }
  s.author           = { 'Your Company' => 'email@example.com' }
  s.static_framework = true
  s.public_header_files = 'Classes/**/*.h'
  s.source           = { :path => '.' }
  s.source_files     = 'Classes/**/*'
  s.ios.deployment_target = '12.0'
  s.dependency 'Flutter'
  s.dependency 'LibTorch-Lite', '~>2.1.0'
  s.dependency 'OpenCV', '4.3.0'
  s.platform = :ios, '12.0'

  # Flutter.framework does not contain a i386 slice.
  s.pod_target_xcconfig = {
    'DEFINES_MODULE' => 'YES',
    'EXCLUDED_ARCHS[sdk=iphonesimulator*]' => 'i386',
    'HEADER_SEARCH_PATHS' => '$(inherited) "${PODS_ROOT}/LibTorch-Lite/install/include"'
  }
  s.swift_version = '5.0'

  # Needed for Libtorch-Lite 2.1.0
  s.xcconfig = {
    "CLANG_CXX_LANGUAGE_STANDARD" => "c++17",
    "CLANG_CXX_LIBRARY" => "libc++"
  }
end
```
cd example/ios
pod install

Open the simulator:

open -a simulator

Run on simulator:

 flutter run -d 8648E37B-49FB-476F-9B52-041F5F4D4FD

Eventually this returns:

Invalid argument(s): Failed to load dynamic library 'libtorch_test_version.framework/libtorch_test_version': dlopen(libtorch_test_version.framework/libtorch_test_version, 0x0001): tried: '/Library/Developer/CoreSimulator/Volumes/iOS_21A5303d/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 17.0.simruntime/Contents/Resources/RuntimeRootlibtorch_test_version.framework/libtorch_test_version' (no such file), '/Library/Developer/CoreSimulator/Volumes/iOS_21A5303d/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 17.0.simruntime/Contents/Resources/RuntimeRoot/usr/lib/swift/libtorch_test_version.framework/libtorch_test_version' (no such file), 

Honestly this is beyond frustrating. At this point I stopped, I don't care anymore...

andynewman10 commented 10 months ago

@cyrillkuettel You must make sure that, in Xcode, Strip symbols is not set to All symbols, just Non-global symbols. This is critical and might explain dlopen failures.

andynewman10 commented 10 months ago

Mind you, in debug builds this should not be an issue.

Have you tried the code on iOS 13+? Say, iOS 15?

cyrillkuettel commented 10 months ago

Yes, I have set it to Non-global symbols; forgot to mention this. Still the same "failed to lookup symbol" error.

cyrillkuettel commented 10 months ago

Screenshot from Simulator: simulator_screenshot_A60A5CFC-BFEC-49C9-AA60-37AA8EC5F095

andynewman10 commented 10 months ago

To be clear

cyrillkuettel commented 10 months ago

andynewman10 commented 9 months ago

I understand your frustration. Unfortunately, I'm not very good at iOS programming, so I cannot help you with this 'podspec debugging' issue. It looks like a file is not found, but the cause is unknown.

Do you know what the benefits of using 2.1.0 are over 1.13.x? Just curious.

cyrillkuettel commented 9 months ago

I would not bother with 2.1.0 unless you need it for a very specific purpose.

Potential benefits:

luvwinnie commented 2 months ago

How can I use LibTorch version 2? I have a ViT model which uses attention; it seems LibTorch 1.13 doesn't have the operation, giving these errors:

libc++abi: terminating due to uncaught exception of type torch::jit::ErrorReport: 
Unknown builtin op: aten::scaled_dot_product_attention.
Here are some suggestions: 
    aten::_scaled_dot_product_attention

The original call is:
 File "code/__torch__/timm/models/vision_transformer/___torch_mangle_1064.py", line 32
  _6 = (q_norm).forward()
  _7 = (k_norm).forward()
  x = torch.scaled_dot_product_attention(q, k, v)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
  input = torch.reshape(torch.transpose(x, 1, 2), [_0, _2, _4])
  _8 = (proj_drop).forward((proj).forward(input, ), )

Message from debugger: killed

abdelaziz-mahdy commented 2 months ago

How can I use LibTorch version 2? I got a ViT model which uses attention; it seems LibTorch 1.13 doesn't have the operation with these errors. (full trace quoted above)

An update to LibTorch is needed, so I will look into it when I have time.

Edit: @luvwinnie the latest version of PyTorch is already used: https://github.com/abdelaziz-mahdy/pytorch_lite/blob/3ab7bc081c6dc19b0947f34f815974b9682e2d85/android/build.gradle#L64

Same for iOS: https://github.com/CocoaPods/Specs/tree/master/Specs/1/3/c/LibTorch
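Until the runtimes catch up, one possible workaround (a sketch, assuming you can re-export the model) is to compute attention manually before tracing, since older mobile op sets do ship softmax/matmul even where the fused aten::scaled_dot_product_attention op is missing:

```python
import math
import torch
import torch.nn.functional as F

# Drop-in replacement for F.scaled_dot_product_attention (no mask, no
# dropout), built only from ops that older LibTorch mobile runtimes ship.
def manual_sdpa(q, k, v):
    scale = 1.0 / math.sqrt(q.size(-1))
    attn = torch.softmax((q @ k.transpose(-2, -1)) * scale, dim=-1)
    return attn @ v

q, k, v = (torch.randn(1, 4, 8, 16) for _ in range(3))
out = manual_sdpa(q, k, v)
assert out.shape == (1, 4, 8, 16)

# Matches the fused op where it exists (torch >= 2.0).
if hasattr(F, "scaled_dot_product_attention"):
    assert torch.allclose(out, F.scaled_dot_product_attention(q, k, v), atol=1e-5)
```

For a timm ViT specifically, one would (hypothetically) patch the Attention module's forward to route through a function like this before calling torch.jit.trace; whether that covers every op in your particular checkpoint is an assumption, not something verified here.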

luvwinnie commented 2 months ago

@abdelaziz-mahdy It seems like my environment is using LibTorch (1.13.0.1) on iOS?

PODS:
  - camera_avfoundation (0.0.1):
    - Flutter
  - Flutter (1.0.0)
  - LibTorch (1.13.0.1):
    - LibTorch/Core (= 1.13.0.1)
  - LibTorch/Core (1.13.0.1):
    - LibTorch/Torch
  - LibTorch/Torch (1.13.0.1)
  - onnxruntime (0.0.1):
    - Flutter
    - onnxruntime-objc (= 1.15.1)
  - onnxruntime-c (1.15.1)
  - onnxruntime-objc (1.15.1):
    - onnxruntime-objc/Core (= 1.15.1)
  - onnxruntime-objc/Core (1.15.1):
    - onnxruntime-c (= 1.15.1)
  - path_provider_foundation (0.0.1):
    - Flutter
    - FlutterMacOS
  - pytorch_lite (0.0.1):
    - Flutter
    - LibTorch (~> 1.13.0.1)

DEPENDENCIES:
  - camera_avfoundation (from `.symlinks/plugins/camera_avfoundation/ios`)
  - Flutter (from `Flutter`)
  - onnxruntime (from `.symlinks/plugins/onnxruntime/ios`)
  - path_provider_foundation (from `.symlinks/plugins/path_provider_foundation/darwin`)
  - pytorch_lite (from `.symlinks/plugins/pytorch_lite/ios`)

And in pubspec.yaml:

dependencies:
  flutter:
    sdk: flutter
  # image_picker: ^0.8.4+4

  # The following adds the Cupertino Icons font to your application.
  # Use with the CupertinoIcons class for iOS style icons.
  image: ^4.2.0
  pytorch_lite: ^4.2.5
  camera: 0.10.6
  syncfusion_flutter_gauges: ^26.1.42
  loading_animation_widget: ^1.2.1

Is my pytorch_lite not the latest, or is there some other problem?

abdelaziz-mahdy commented 2 months ago

Does it work on Android? If yes, it may be a PyTorch iOS problem.