casys-kaist / CoVA

Official code repository for "CoVA: Exploiting Compressed-Domain Analysis to Accelerate Video Analytics [USENIX ATC 22]"
https://www.usenix.org/conference/atc22/presentation/hwang

Problem with building TF 2.8 in docker image #1

Closed: Nier4Ryu closed this issue 2 years ago

Nier4Ryu commented 2 years ago

Hello! I'm currently trying to reproduce your work, and I'm stuck at the step where TensorFlow 2.8 is built inside the Docker image. The failing line is "RUN /usr/local/lib/bazel/bin/bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package".

It keeps giving me the following error:

35 5637.7 ERROR: /build/tf/tensorflow/tensorflow/compiler/tf2tensorrt/BUILD:43:11: Compiling tensorflow/compiler/tf2tensorrt/stub/nvinfer_plugin_stub.cc failed: (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -MD -MF bazel-out/k8-opt/bin/tensorflow/compiler/tf2tensorrt/_objs/tensorrt_stub/nvinfer_plugin_stub.pic.d ... (remaining 150 argument(s) skipped)

35 5637.7 tensorflow/compiler/tf2tensorrt/stub/nvinfer_plugin_stub.cc:66:2: error: #error This version of TensorRT is not supported.

What I have tried:

Adding a step to clean the Bazel cache, following https://github.com/tensorflow/tensorflow/issues/55662: "RUN /usr/local/lib/bazel/bin/bazel clean --expunge"

As the above didn't work, I'm currently trying to build gcc-7.3.1, since the linked issue says that setting the correct GCC version is important and the image's current version is gcc-7.5.0. I wasn't able to find any prebuilt distributions, so I'm trying to build it from source using https://src.fedoraproject.org/lookaside/extras/gcc/gcc-7.3.1-20180303.tar.bz2/sha512/3c65092ea40f401c7bb5c220079f367323b389668763a93962c321a98caa4d72487748c034903f5a6381d9edadc1bc848831f3bc5404db283a234cf2d7bb82f1/ (a rough sketch of the build is below), but I have little hope that this will succeed, so any help would be greatly appreciated.
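The from-source build I'm attempting roughly follows the standard GCC procedure; the install prefix, job count, and extracted directory name below are placeholders, not settled choices.

tar xf gcc-7.3.1-20180303.tar.bz2
cd gcc-7.3.1-20180303                    # extracted directory name may differ
./contrib/download_prerequisites         # fetches GMP/MPFR/MPC/ISL into the source tree
mkdir build && cd build
../configure --prefix=/opt/gcc-7.3.1 --enable-languages=c,c++ --disable-multilib
make -j"$(nproc)"
make install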

Since this is a Docker image build this info may not be that useful, but my current platform is Ubuntu 20.04 LTS on WSL2 with Docker Engine 20.10.16.

The full error log is the following:

ERROR [tf-builder 6/7] RUN /usr/local/lib/bazel/bin/bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package 5642.0s

[tf-builder 6/7] RUN /usr/local/lib/bazel/bin/bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package:

35 0.364 Starting local Bazel server and connecting to it...

35 1.542 INFO: Options provided by the client:

35 1.542 Inherited 'common' options: --isatty=0 --terminal_columns=80

35 1.543 INFO: Reading rc options for 'build' from /build/tf/tensorflow/.bazelrc:

35 1.543 Inherited 'common' options: --experimental_repo_remote_exec

35 1.544 INFO: Reading rc options for 'build' from /build/tf/tensorflow/.bazelrc:

35 1.544 'build' options: --define framework_shared_object=true --java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --host_java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library

35 1.544 INFO: Reading rc options for 'build' from /build/tf/tensorflow/.tf_configure.bazelrc:

35 1.544 'build' options: --action_env PYTHON_BIN_PATH=/usr/bin/python3.7 --action_env PYTHON_LIB_PATH=/usr/local/lib/python3.7/dist-packages --python_path=/usr/bin/python3.7 --config=tensorrt --action_env CUDA_TOOLKIT_PATH=/usr/local/cuda-11.4 --action_env TF_CUDA_COMPUTE_CAPABILITIES=8.6 --action_env LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:/usr/lib/i386-linux-gnu:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 --action_env GCC_HOST_COMPILER_PATH=/usr/bin/x86_64-linux-gnu-gcc-7 --config=cuda

35 1.544 INFO: Reading rc options for 'build' from /build/tf/tensorflow/.bazelrc:

35 1.544 'build' options: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/common,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/fallback,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils

35 1.545 INFO: Found applicable config definition build:short_logs in file /build/tf/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING

35 1.548 INFO: Found applicable config definition build:v2 in file /build/tf/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1

35 1.548 INFO: Found applicable config definition build:tensorrt in file /build/tf/tensorflow/.bazelrc: --repo_env TF_NEED_TENSORRT=1

35 1.549 INFO: Found applicable config definition build:cuda in file /build/tf/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda

35 1.549 INFO: Found applicable config definition build:opt in file /build/tf/tensorflow/.tf_configure.bazelrc: --copt=-Wno-sign-compare --host_copt=-Wno-sign-compare

35 1.549 INFO: Found applicable config definition build:linux in file /build/tf/tensorflow/.bazelrc: --copt=-w --host_copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels --distinct_host_configuration=false --experimental_guard_against_concurrent_changes

35 1.550 INFO: Found applicable config definition build:dynamic_kernels in file /build/tf/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS

35 5.219 WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/tensorflow/runtime/archive/c3e082762b7664bbc7ffd2c39e86464928e27c0c.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found

35 66.45 DEBUG: /root/.cache/bazel/_bazel_root/7f8526f8c924056e6e66ae135a776306/external/bazel_tools/tools/cpp/lib_cc_configure.bzl:118:10:

35 66.45 Auto-Configuration Warning: 'TMP' environment variable is not set, using 'C:\Windows\Temp' as default

35 66.56 Loading: (1 packages loaded)

35 66.56 Loading: 1 packages loaded

35 66.88 Analyzing: target //tensorflow/tools/pip_package:build_pip_package (2 packages loaded, 0 targets configured)

35 76.76 Analyzing: target //tensorflow/tools/pip_package:build_pip_package (225 packages loaded, 3919 targets configured)

35 76.84 DEBUG: Rule 'io_bazel_rules_docker' indicated that a canonical reproducible form can be obtained by modifying arguments shallow_since = "1596824487 -0400"

35 76.84 DEBUG: Repository io_bazel_rules_docker instantiated at:

35 76.84 /build/tf/tensorflow/WORKSPACE:23:14: in

35 76.84 /build/tf/tensorflow/tensorflow/workspace0.bzl:108:34: in workspace

35 76.84 /root/.cache/bazel/_bazel_root/7f8526f8c924056e6e66ae135a776306/external/bazel_toolchains/repositories/repositories.bzl:35:23: in repositories

35 76.84 Repository rule git_repository defined at:

35 76.84 /root/.cache/bazel/_bazel_root/7f8526f8c924056e6e66ae135a776306/external/bazel_tools/tools/build_defs/repo/git.bzl:199:33: in

35 88.23 Analyzing: target //tensorflow/tools/pip_package:build_pip_package (244 packages loaded, 3919 targets configured)

35 96.99 WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/XNNPACK/archive/113092317754c7dea47bfb3cb49c4f59c3c1fa10.zip failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found

35 108.7 Analyzing: target //tensorflow/tools/pip_package:build_pip_package (249 packages loaded, 3981 targets configured)

35 118.6 INFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (450 packages loaded, 36216 targets configured).

35 118.6 INFO: Found 1 target...

35 118.7 [0 / 12] [Prepa] Creating source manifest for //tensorflow/python/debug/lib:grpc_tensorflow_server

35 136.3 [91 / 105] Compiling src/google/protobuf/map_field.cc; 1s local ... (6 actions, 5 running)

35 156.8 [158 / 187] Compiling src/google/protobuf/compiler/command_line_interface.cc; 1s local ... (6 actions, 5 running)

35 180.3 [367 / 392] Compiling src/google/protobuf/util/message_differencer.cc; 2s local ... (6 actions, 5 running)

35 207.1 [2,085 / 5,054] Compiling llvm/lib/Support/Triple.cpp; 1s local ... (6 actions, 5 running)

35 238.1 [2,266 / 5,197] Compiling mlir/tools/mlir-tblgen/OpDefinitionsGen.cpp; 5s local ... (6 actions, 5 running)

35 273.8 [2,703 / 5,562] Compiling llvm/utils/TableGen/DAGISelEmitter.cpp; 3s local ... (6 actions, 5 running)

35 315.1 [2,969 / 5,827] Compiling mlir/lib/IR/AsmPrinter.cpp; 2s local ... (6 actions, 5 running)

35 362.2 [3,734 / 6,605] Generating code from table: lib/Target/X86/X86.td @llvm-project//llvm:X86CommonTableGen__gen_fast_isel_genrule; 3s local ... (6 actions, 5 running)

35 416.6 [5,847 / 8,903] Compiling tensorflow/core/lib/io/inputstream_interface.cc; 1s local ... (6 actions, 5 running)

35 479.0 [6,639 / 9,901] Compiling tensorflow/stream_executor/cuda/cuda_blas.cc; 5s local ... (6 actions, 5 running)

35 553.1 [6,795 / 9,901] Compiling tensorflow/core/common_runtime/collective_param_resolver_local.cc; 7s local ... (6 actions, 5 running)

35 635.8 [6,941 / 9,901] Compiling tensorflow/core/util/batch_util.cc; 22s local ... (6 actions, 5 running)

35 731.0 [8,943 / 12,244] Compiling tensorflow/stream_executor/cuda/cuda_fft.cc; 3s local ... (6 actions, 5 running)

35 840.8 [9,305 / 12,531] Compiling mlir/lib/IR/Builders.cpp; 2s local ... (6 actions, 5 running)

35 967.6 [10,244 / 13,400] Compiling tensorflow/core/framework/memory_types.cc; 3s local ... (6 actions, 5 running)

35 1112.8 [10,664 / 13,680] Compiling tensorflow/core/common_runtime/collective_rma_local.cc; 3s local ... (6 actions, 5 running)

35 1279.8 [11,967 / 15,334] Compiling tensorflow/compiler/mlir/tools/kernel_gen/kernel_creator.cc; 35s local ... (6 actions, 5 running)

35 1472.0 [12,304 / 15,700] Compiling llvm/lib/DebugInfo/DWARF/DWARFContext.cpp; 5s local ... (6 actions, 5 running)

35 1693.2 [12,633 / 15,953] Compiling llvm/lib/Target/ARM/MVEVPTBlockPass.cpp; 6s local ... (6 actions, 5 running)

35 1947.5 [13,154 / 16,334] Compiling tensorflow/compiler/mlir/tensorflow/ir/tf_ops_a_m.cc; 86s local ... (6 actions running)

35 2240.0 [13,518 / 16,612] Compiling tensorflow/compiler/mlir/xla/transforms/legalize_tf.cc; 9s local ... (6 actions running)

35 2575.9 [14,077 / 17,262] Compiling llvm/lib/CodeGen/PeepholeOptimizer.cpp; 6s local ... (6 actions running)

35 2962.0 [14,517 / 17,546] Compiling tensorflow/compiler/mlir/tensorflow/ir/tf_ops.cc; 22s local ... (6 actions running)

35 3406.3 [15,485 / 18,401] Compiling tensorflow/compiler/mlir/tensorflow/ir/tf_ops.cc; 466s local ... (6 actions running)

35 3917.0 [16,268 / 19,094] Compiling llvm/lib/CodeGen/MachineBlockFrequencyInfo.cpp; 4s local ... (6 actions running)

35 4504.3 [17,614 / 20,441] Compiling tensorflow/compiler/xla/service/topk_rewriter.cc; 10s local ... (6 actions running)

35 5180.2 [18,765 / 21,377] Compiling tensorflow/core/kernels/tile_functor_cpu_int16.cc; 6s local ... (6 actions running)

35 5637.7 ERROR: /build/tf/tensorflow/tensorflow/compiler/tf2tensorrt/BUILD:43:11: Compiling tensorflow/compiler/tf2tensorrt/stub/nvinfer_plugin_stub.cc failed: (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -MD -MF bazel-out/k8-opt/bin/tensorflow/compiler/tf2tensorrt/_objs/tensorrt_stub/nvinfer_plugin_stub.pic.d ... (remaining 150 argument(s) skipped)

35 5637.7 tensorflow/compiler/tf2tensorrt/stub/nvinfer_plugin_stub.cc:66:2: error: #error This version of TensorRT is not supported.

35 5637.7 #error This version of TensorRT is not supported.

35 5637.7 ^~~~~

35 5639.8 Target //tensorflow/tools/pip_package:build_pip_package failed to build

35 5639.8 Use --verbose_failures to see the command lines of failed build steps.

35 5640.5 INFO: Elapsed time: 5639.423s, Critical Path: 654.25s

35 5640.5 INFO: 19859 processes: 8002 internal, 11857 local.

35 5640.5 FAILED: Build did NOT complete successfully

35 5640.5 FAILED: Build did NOT complete successfully


executor failed running [/bin/sh -c /usr/local/lib/bazel/bin/bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package]: exit code: 1

jinuhwang commented 2 years ago

Hi @Nier4Ryu, my best guess is that you have a different TensorRT version. Could you check the TensorRT version and your CUDA version on your host?
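For reference, a quick way to check (on the host, or inside the partially built image) could be something like the following; the dpkg query assumes a Debian/Ubuntu environment:

nvcc --version                 # CUDA compilation tools version
dpkg -l | grep -i nvinfer      # installed TensorRT (libnvinfer*) package versions
nvidia-smi                     # driver version and the CUDA version it supports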

Nier4Ryu commented 2 years ago

On my local Windows 11 host I currently have CUDA 11.7 installed and no TensorRT installed. Would I have to uninstall this?

Nier4Ryu commented 2 years ago

I just built the Docker image without installing TF; the current image shows:

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Aug_15_21:14:11_PDT_2021
Cuda compilation tools, release 11.4, V11.4.120
Build cuda_11.4.r11.4/compiler.30300941_0

dpkg -l | grep nvinfer
ii libnvinfer-bin 8.4.2-1+cuda11.6 amd64 TensorRT binaries
ii libnvinfer-dev 8.4.2-1+cuda11.6 amd64 TensorRT development libraries and headers
ii libnvinfer-plugin-dev 8.4.2-1+cuda11.6 amd64 TensorRT plugin libraries
ii libnvinfer-plugin8 8.4.2-1+cuda11.6 amd64 TensorRT plugin libraries
ii libnvinfer-samples 8.4.2-1+cuda11.6 all TensorRT samples
ii libnvinfer8 8.4.2-1+cuda11.6 amd64 TensorRT runtime libraries
ii python3-libnvinfer 8.4.2-1+cuda11.6 amd64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 8.4.2-1+cuda11.6 amd64 Python 3 development package for TensorRT

I downloaded the TensorRT package "TensorRT 8.2 GA Update 3 for x86_64 Architecture - TensorRT 8.2 GA Update 3 for Ubuntu 18.04 and CUDA 11.0, 11.1, 11.2, 11.3, 11.4 and 11.5 DEB local repo Package", as this file had the correct name of the required TensorRT deb file.

Could the problem come from here?

Nier4Ryu commented 2 years ago

I resolved the TensorRT installation by changing install_trt.sh from

dpkg -i nv-tensorrt-repo-${os}-${tag}_1-1_amd64.deb
apt-key add /var/nv-tensorrt-repo-${os}-${tag}/7fa2af80.pub

apt-get update
apt-get install -y tensorrt
apt-get install -y python3-libnvinfer-dev uff-converter-tf onnx-graphsurgeon

to

dpkg -i nv-tensorrt-repo-ubuntu1804-cuda11.4-trt8.2.4.2-ga-20220324_1-1_amd64.deb
apt-key add /var/nv-tensorrt-repo-ubuntu1804-cuda11.4-trt8.2.4.2-ga-20220324/*.pub

apt-get update
apt-get install libnvinfer8=8.2.4-1+cuda11.4
apt-get install libnvinfer-plugin8=8.2.4-1+cuda11.4
apt-get install libnvparsers8=8.2.4-1+cuda11.4
apt-get install libnvonnxparsers8=8.2.4-1+cuda11.4
apt-get install libnvinfer-bin=8.2.4-1+cuda11.4
apt-get install libnvinfer-dev=8.2.4-1+cuda11.4
apt-get install libnvinfer-plugin-dev=8.2.4-1+cuda11.4
apt-get install libnvparsers-dev=8.2.4-1+cuda11.4
apt-get install libnvonnxparsers-dev=8.2.4-1+cuda11.4
apt-get install libnvinfer-samples=8.2.4-1+cuda11.4
apt-get install libnvinfer-doc=8.2.4-1+cuda11.4

apt-get install tensorrt=8.2.4.2-1+cuda11.4

apt-get install python3-libnvinfer=8.2.4-1+cuda11.4
apt-get install python3-libnvinfer-dev=8.2.4-1+cuda11.4

apt-get install graphsurgeon-tf=8.2.4-1+cuda11.4
apt-get install uff-converter-tf=8.2.4-1+cuda11.4
apt-get install onnx-graphsurgeon=8.2.4-1+cuda11.4

So now:

dpkg -l | grep nvinfer
ii  libnvinfer-bin                                              8.2.4-1+cuda11.4                    amd64        TensorRT binaries
ii  libnvinfer-dev                                              8.2.4-1+cuda11.4                    amd64        TensorRT development libraries and headers
ii  libnvinfer-doc                                              8.2.4-1+cuda11.4                    all          TensorRT documentation
ii  libnvinfer-plugin-dev                                       8.2.4-1+cuda11.4                    amd64        TensorRT plugin libraries
ii  libnvinfer-plugin8                                          8.2.4-1+cuda11.4                    amd64        TensorRT plugin libraries
ii  libnvinfer-samples                                          8.2.4-1+cuda11.4                    all          TensorRT samples
ii  libnvinfer8                                                 8.2.4-1+cuda11.4                    amd64        TensorRT runtime libraries
ii  python3-libnvinfer                                          8.2.4-1+cuda11.4                    amd64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                                      8.2.4-1+cuda11.4                    amd64        Python 3 development package for TensorRT

So I guess I have now properly installed TensorRT 8.2.4, but I still get this error:

ERROR: /build/tf/tensorflow/tensorflow/compiler/tf2tensorrt/BUILD:43:11: Compiling tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc failed: (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -MD -MF bazel-out/k8-opt/bin/tensorflow/compiler/tf2tensorrt/_objs/tensorrt_stub/nvinfer_stub.pic.d ... (remaining 150 argument(s) skipped)
tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:66:2: error: #error This version of TensorRT is not supported.
error This version of TensorRT is not supported.

What could be the problem?

jinuhwang commented 2 years ago

To me, it seems to be an issue with the TensorFlow and TensorRT versions. If you weren't able to get the TensorRT version that I used in install_trt.sh, you probably have to change TensorFlow to a matching version as well.
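For reference, pinning the TensorFlow sources to a different release tag before the Bazel build could look roughly like this; the tag shown is only an example, and which TF release matches which TensorRT version has to be checked against the TensorFlow release notes:

git clone --branch v2.8.0 --depth 1 https://github.com/tensorflow/tensorflow.git   # example tag
cd tensorflow
./configure      # point it at the CUDA/TensorRT installed in the image, then rerun the bazel build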

Nier4Ryu commented 2 years ago

Deleting the current image and rebuilding from scratch did the job!
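(For reference, forcing such a clean rebuild can be done with something like the following; the image tag is a placeholder, not the repo's actual tag.)

docker image rm cova:latest            # remove the previously built image (placeholder tag)
docker builder prune --all --force     # drop cached build layers so install_trt.sh re-runs
docker build --no-cache -t cova:latest .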

The changes made to install_trt.sh are the following, from

dpkg -i nv-tensorrt-repo-${os}-${tag}_1-1_amd64.deb
apt-key add /var/nv-tensorrt-repo-${os}-${tag}/7fa2af80.pub

apt-get update
apt-get install -y tensorrt
apt-get install -y python3-libnvinfer-dev uff-converter-tf onnx-graphsurgeon

to

dpkg -i nv-tensorrt-repo-ubuntu1804-cuda11.4-trt8.2.4.2-ga-20220324_1-1_amd64.deb
apt-key add /var/nv-tensorrt-repo-ubuntu1804-cuda11.4-trt8.2.4.2-ga-20220324/*.pub

apt-get update
apt-get install -y libnvinfer8=8.2.4-1+cuda11.4 -f
apt-get install -y libnvinfer-plugin8=8.2.4-1+cuda11.4 -f
apt-get install -y libnvparsers8=8.2.4-1+cuda11.4 -f
apt-get install -y libnvonnxparsers8=8.2.4-1+cuda11.4 -f
apt-get install -y libnvinfer-bin=8.2.4-1+cuda11.4 -f
apt-get install -y libnvinfer-dev=8.2.4-1+cuda11.4 -f
apt-get install -y libnvinfer-plugin-dev=8.2.4-1+cuda11.4 -f
apt-get install -y libnvparsers-dev=8.2.4-1+cuda11.4 -f
apt-get install -y libnvonnxparsers-dev=8.2.4-1+cuda11.4 -f
apt-get install -y libnvinfer-samples=8.2.4-1+cuda11.4 -f
apt-get install -y libnvinfer-doc=8.2.4-1+cuda11.4 -f

apt-get install -y tensorrt=8.2.4.2-1+cuda11.4 -f

apt-get install -y python3-libnvinfer=8.2.4-1+cuda11.4 -f
apt-get install -y python3-libnvinfer-dev=8.2.4-1+cuda11.4 -f

apt-get install -y graphsurgeon-tf=8.2.4-1+cuda11.4 -f
apt-get install -y uff-converter-tf=8.2.4-1+cuda11.4 -f
apt-get install -y onnx-graphsurgeon=8.2.4-1+cuda11.4 -f

Additionally, in tf_configure.bazelrc,

build --action_env TF_CUDA_COMPUTE_CAPABILITIES="8.6"

should change to

build --action_env TF_CUDA_COMPUTE_CAPABILITIES=<compute capability of your GPU; can be found in the CUDA article on Wikipedia>
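As a side note, a reasonably recent NVIDIA driver can also report the compute capability directly; on older drivers the compute_cap query may not exist, in which case the Wikipedia table is the fallback:

nvidia-smi --query-gpu=name,compute_cap --format=csv
# e.g. "NVIDIA GeForce RTX 3090, 8.6" maps to TF_CUDA_COMPUTE_CAPABILITIES="8.6"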