gaikwadrahul8 opened 4 days ago
This issue, originally reported by @kubaraczkowski, has been moved to this dedicated LiteRT repository to enhance issue tracking and prioritization. To ensure continuity, we have created this new issue on your behalf.
We appreciate your understanding and look forward to your continued involvement.
Issue type
Build/Install
Have you reproduced the bug with TensorFlow Nightly?
No
Source
source
TensorFlow version
2.16.1
Custom code
No
OS platform and distribution
Docker
Mobile device
No response
Python version
3.11
Bazel version
No response
GCC/compiler version
No response
CUDA/cuDNN version
No response
GPU model and memory
No response
Current behavior?
Hi,
I'm following the documented steps to build TF Lite using Docker (https://www.tensorflow.org/lite/android/lite_build#set_up_build_environment_using_docker) with TF release 2.16.1 (I also tried 2.15). There are no modifications to the sources, the Dockerfile, or the commands; the only change is limiting the target architectures to arm64-v8a.
The step that builds the AAR for a given model fails with compiler errors reporting that constructs used by the NNAPI delegate are C++20 extensions. It looks like the rest of the toolchain is being compiled with gnu++17? A sketch of the commands I'm running is included under "Standalone code to reproduce the issue" below.
Standalone code to reproduce the issue
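For reference, a minimal sketch of the flow from the linked guide, with no modifications other than restricting the target architecture; `model1.tflite` is a placeholder model file, and the script and flag names are taken from the guide:

```shell
# Build the TF Lite build-environment image from the TensorFlow source tree,
# using the tflite-android.Dockerfile provided by the guide (unmodified).
docker build . -t tflite-builder -f tflite-android.Dockerfile

# Start the container with the current directory mounted.
docker run -it -v $PWD:/host_dir tflite-builder bash

# Inside the container: build the AAR for a given model, restricted to arm64-v8a.
# This is the step that fails with the "C++20 extensions" errors from the NNAPI delegate.
bash tensorflow/lite/tools/build_aar.sh \
  --input_models=model1.tflite \
  --target_archs=arm64-v8a
```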
Relevant log output