google-ai-edge / LiteRT

LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
https://ai.google.dev/edge/litert
Apache License 2.0

[TfLite] unresolved TfLiteGpuDelegateV2Create with Visual Studio #161

Open gaikwadrahul8 opened 3 days ago

gaikwadrahul8 commented 3 days ago

Issue type

Bug

Have you reproduced the bug with TensorFlow Nightly?

Yes

Source

source

TensorFlow version

2.13

Custom code

No

OS platform and distribution

Windows 10 Pro

Mobile device

No response

Python version

3.11.4

Bazel version

No response

GCC/compiler version

Microsoft Visual Studio 2022 C++ compiler

CUDA/cuDNN version

No response

GPU model and memory

No response

Current behavior?

I built TfLite with -DTFLITE_ENABLE_GPU=ON and tried to test TfLite with the GPU, but the minimal working C++ example for the GPU delegate from https://www.tensorflow.org/lite/android/delegates/gpu_native does not work under Windows 10 with Visual Studio 2022. I receive the linker error LNK2001: unresolved external symbol __imp_TfLiteGpuDelegateV2Create. I ran dumpbin on tensorflow-lite.lib and it shows that a static function TfLiteGpuDelegateV2Create does exist. I tried a different Windows 10 machine with Visual Studio 2019, but I receive the same linker error. I got the example working under Ubuntu Linux 23.04 with gcc 12 using the same build commands (except for replacing the Windows specifics with Linux specifics, of course).
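
For context on the __imp_ prefix: with MSVC it normally appears when the declaration was seen as __declspec(dllimport), so the linker looks for a DLL import entry rather than the symbol inside the static tensorflow-lite.lib. Below is a rough sketch of the export/import macro pattern I assume the delegate header uses; the exact macro names and the declaration in tensorflow/lite/delegates/gpu/delegate.h may differ.

// Sketch only: typical Windows export/import guard (assumed, not copied from the header).
#if defined(_WIN32)
#ifdef TFL_COMPILE_LIBRARY
#define TFL_CAPI_EXPORT __declspec(dllexport)   // set when building the library itself
#else
#define TFL_CAPI_EXPORT __declspec(dllimport)   // default for consumers: generates __imp_ references
#endif
#else
#define TFL_CAPI_EXPORT __attribute__((visibility("default")))
#endif

// The GPU delegate entry point would then be declared with this macro, e.g.:
TFL_CAPI_EXPORT TfLiteDelegate* TfLiteGpuDelegateV2Create(const TfLiteGpuDelegateOptionsV2* options);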

Standalone code to reproduce the issue

Build commands (in Command Prompt):
git clone https://github.com/tensorflow/tensorflow tensorflow_src
mkdir tflite_release_x64
cd tflite_release_x64
cmake -G "Visual Studio 17" -A x64 -DTFLITE_ENABLE_GPU=ON ..\tensorflow_src\tensorflow\lite
cmake --build . -j 16 --config Release
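
For reference, the dumpbin check mentioned above was roughly the following, run from a Developer Command Prompt against the tensorflow-lite.lib produced by the Release build (the findstr filter is just for convenience, not a verbatim transcript):

rem Approximate reconstruction of the symbol check on the built static library.
dumpbin /symbols tensorflow-lite.lib | findstr TfLiteGpuDelegateV2Create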

C++ code (https://www.tensorflow.org/lite/android/delegates/gpu_native):
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/delegates/gpu/delegate.h"
#include <iostream>

using namespace tflite;

int main() {
    // Set up interpreter.
    auto model = FlatBufferModel::BuildFromFile("C:/Users/bartp/source/lite-model_deeplabv3_1_metadata_2.tflite");
    if (!model) return 1;  // Bail out if the model file could not be loaded.
    ops::builtin::BuiltinOpResolver op_resolver;
    std::unique_ptr<Interpreter> interpreter;
    InterpreterBuilder(*model, op_resolver)(&interpreter);

    // Create the GPU delegate with default options; this is the call that fails to link.
    auto* delegate = TfLiteGpuDelegateV2Create(/*default options=*/nullptr);

    std::cout << "Done\n";
    return 0;
}
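
For completeness, once the linker error is resolved I would expect to apply and release the delegate roughly like this (a sketch along the lines of the same docs page, not part of the minimal repro above):

    // Apply the GPU delegate to the graph, then make sure tensors are allocated.
    if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) return 1;
    if (interpreter->AllocateTensors() != kTfLiteOk) return 1;

    // ... fill inputs, call interpreter->Invoke(), read outputs ...

    // The delegate should outlive the interpreter, so release the interpreter first.
    interpreter.reset();
    TfLiteGpuDelegateV2Delete(delegate);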

Relevant log output

1>------ Build started: Project: MweTfLite2.13Gpu, Configuration: Release x64 ------
1>Main.cpp
1>Main.obj : error LNK2001: unresolved external symbol __imp_TfLiteGpuDelegateV2Create
1>C:\Users\bartp\source\MweTfLite2.13Gpu\x64\Release\MweTfLite2.13Gpu.exe : fatal error LNK1120: 1 unresolved externals
gaikwadrahul8 commented 3 days ago

This issue, originally reported by @misterBart, has been moved to this dedicated repository for LiteRT to enhance issue tracking and prioritization. To ensure continuity, we have created this new issue on your behalf.

We appreciate your understanding and look forward to your continued involvement.