am15h / tflite_flutter_plugin

TensorFlow Lite Flutter Plugin
https://pub.dev/packages/tflite_flutter
Apache License 2.0

No operations will run on the GPU #131

Open WillianSalceda opened 3 years ago

WillianSalceda commented 3 years ago

When running the object detection example (it doesn't matter whether it is a custom model or the default one from the example) with the latest binaries (install.bat updated 15 minutes ago) and the -d option, I get the following message:

```
E/tflite  ( 2267): Following operations are not supported by GPU delegate:
E/tflite  ( 2267): ADD: OP is supported, but tensor type isn't matched!
E/tflite  ( 2267): CONCATENATION: OP is supported, but tensor type isn't matched!
E/tflite  ( 2267): CONV_2D: OP is supported, but tensor type isn't matched!
E/tflite  ( 2267): CUSTOM TFLite_Detection_PostProcess: TFLite_Detection_PostProcess
E/tflite  ( 2267): DEPTHWISE_CONV_2D: OP is supported, but tensor type isn't matched!
E/tflite  ( 2267): DEQUANTIZE: Operation is not supported.
E/tflite  ( 2267): LOGISTIC: OP is supported, but tensor type isn't matched!
E/tflite  ( 2267): MAX_POOL_2D: OP is supported, but tensor type isn't matched!
E/tflite  ( 2267): QUANTIZE: Operation is not supported.
E/tflite  ( 2267): RESHAPE: OP is supported, but tensor type isn't matched!
E/tflite  ( 2267): RESIZE_NEAREST_NEIGHBOR: OP is supported, but tensor type isn't matched!
E/tflite  ( 2267): No operations will run on the GPU, and all 267 operations will run on the CPU.
I/tflite  ( 2267): Created 0 GPU delegate kernels.
```

In my loadModel function I have:

```dart
try {
  final gpuDelegateV2 = GpuDelegateV2(
      options: GpuDelegateOptionsV2(
    false,
    TfLiteGpuInferenceUsage.preferenceSustainSpeed,
    TfLiteGpuInferencePriority.auto,
    TfLiteGpuInferencePriority.auto,
    TfLiteGpuInferencePriority.auto,
  ));

  var interpreterOptions = InterpreterOptions()
    //..useNnApiForAndroid = true;
    ..addDelegate(gpuDelegateV2);

  _interpreter = interpreter ??
      await Interpreter.fromAsset(MODEL_FILE_NAME,
          options: interpreterOptions);

  var outputTensors = _interpreter.getOutputTensors();
  _outputShapes = [];
  _outputTypes = [];
  outputTensors.forEach((tensor) {
    _outputShapes.add(tensor.shape);
    _outputTypes.add(tensor.type);
  });
} catch (e) {
  print("Error while creating interpreter: $e");
}
```

By the way, I've already tried every possible combination of the GPU delegate options, and I'm running it on both the emulator and physical devices.

edit: output tensor type is: TfLiteType.float32
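
For anyone debugging the same mismatch: a quick way to see which types the model actually uses is to dump every tensor's type and shape. This is a sketch using the plugin's `Tensor` API (`getInputTensors()`/`getOutputTensors()`, as already used in the code above); `_interpreter` is the field from that snippet.

```dart
// Print the type and shape of every input/output tensor so the
// "tensor type isn't matched" messages can be traced back to the model.
for (final tensor in _interpreter.getInputTensors()) {
  print('input  ${tensor.name}: ${tensor.type} ${tensor.shape}');
}
for (final tensor in _interpreter.getOutputTensors()) {
  print('output ${tensor.name}: ${tensor.type} ${tensor.shape}');
}
```

If the inputs come out as uint8, the model is quantized, which is consistent with the GPU delegate rejecting every op above.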

WillianSalceda commented 3 years ago

By using Netron to visualize the model, I understood what my problem was: the input tensor should be float32, not the output, as I had thought. But the problem now is that when using the GPU delegate with default settings, the app crashes instantly after being built. I followed this issue to make it run (using the image classification example).

```
E/libc    (29521): Access denied finding property "camera.aux.packagelist"
F/libc    (29521): Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x78 in tid 29653 (1.ui), pid 29521 (bject_detection)
*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
Build fingerprint: 'Xiaomi/dipper/dipper:10/QKQ1.190828.002/V12.0.1.0.QEAMIXM:user/release-keys'
Revision: '0'
ABI: 'arm64'
Timestamp: 2021-06-22 18:44:53-0300
pid: 29521, tid: 29653, name: 1.ui  >>> com.example.object_detection <<<
uid: 10471
signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x78
Cause: null pointer dereference
    x0  0000000000000000  x1  000000763b2c3c28  x2  ffffffffffffffd0  x3  000000763b2c4ce0
    x4  000000763b2c5fa0  x5  000000763b2c5fd0  x6  0000000000000000  x7  0000000000000000
    x8  0000000000000084  x9  0000000000000001  x10 0000000000000001  x11 0000000000000014
    x12 0000000000000098  x13 0000000000000000  x14 0000000000000000  x15 000000763b2c4d10
    x16 00000075ce2d80d0  x17 00000076bcafe5c0  x18 00000075cfe78000  x19 0000007623374800
    x20 00000075ce240050  x21 0000000000000000  x22 00000075ce3a5f48  x23 0000000000000000
    x24 00000076233cdc00  x25 000000762b4f0880  x26 000000762b4f0880  x27 000000762b4f0d10
    x28 00000075d4bd0a10  x29 000000763b2c5ff0
    sp  000000763b2c3ae0  lr  00000075ce002258  pc  00000075ce002334
backtrace:
      #00 pc 000000000180c334  /data/app/com.example.object_detection-0SfcBTnSRorqn2jPFjcMiw==/lib/arm64/libflutter.so (BuildId: 792a3be030ad1b2b80a60088dac3531791aa2054)
      #01 pc 000000000180c254  /data/app/com.example.object_detection-0SfcBTnSRorqn2jPFjcMiw==/lib/arm64/libflutter.so (BuildId: 792a3be030ad1b2b80a60088dac3531791aa2054)
Lost connection to device.
```
am15h commented 3 years ago

@WillianSalceda, please try the latest tflite_flutter, v0.9.0, which includes some improvements in delegate support.

With v0.9.0, creating a GpuDelegateV2 without any parameters should work fine too:

final gpuDelegateV2 = GpuDelegateV2();
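
For reference, a minimal sketch of how the simplified v0.9.0 setup might look inside loadModel. The try/catch CPU fallback is my addition, not something the thread prescribes, and `MODEL_FILE_NAME` / `_interpreter` are the names from the code above:

```dart
try {
  // With tflite_flutter v0.9.0, default delegate options should suffice.
  final gpuDelegateV2 = GpuDelegateV2();
  final options = InterpreterOptions()..addDelegate(gpuDelegateV2);
  _interpreter =
      await Interpreter.fromAsset(MODEL_FILE_NAME, options: options);
} catch (e) {
  // If the GPU delegate cannot be created on this device, fall back to CPU
  // instead of crashing.
  print('GPU delegate failed ($e), falling back to CPU');
  _interpreter = await Interpreter.fromAsset(MODEL_FILE_NAME);
}
```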

WillianSalceda commented 3 years ago

> @WillianSalceda, Please try the latest tflite_flutter: v0.9.0 with some improvements in delegate support.
>
> With v0.9.0, Creating GpuDelegateV2 without any parameters should work fine too.
>
> `final gpuDelegateV2 = GpuDelegateV2();`

Got this error when running `pub get`:

```
So, because object_detection depends on both tflite_flutter ^0.9.0 and tflite_flutter_helper ^0.2.0, version solving failed.
pub get failed (1; So, because object_detection depends on both tflite_flutter ^0.9.0 and tflite_flutter_helper ^0.2.0, version solving failed.)
```
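
The conflict arises because tflite_flutter_helper ^0.2.0 declares a tflite_flutter constraint that is incompatible with ^0.9.0. Until a compatible helper release is published, one possible (unsupported) workaround is a dependency override in pubspec.yaml — a sketch only, and it may break at runtime if the helper uses APIs that changed in 0.9.0:

```yaml
dependencies:
  tflite_flutter: ^0.9.0
  tflite_flutter_helper: ^0.2.0

# Forces the resolver to use tflite_flutter 0.9.0 even though
# tflite_flutter_helper declares an older constraint.
dependency_overrides:
  tflite_flutter: ^0.9.0
```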
trantuanngoc commented 2 years ago

@WillianSalceda did you ever find a solution?