Closed by januarkai 2 years ago
What does your workspace file look like?
I have solved the previous error, but now I am facing a new one. Should I open a new issue for this? Here is what my WORKSPACE looks like:

```python
workspace(name = "Torch-TensorRT")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

http_archive(
    name = "rules_python",
    sha256 = "778197e26c5fbeb07ac2a2c5ae405b30f6cb7ad1f5510ea6fdac03bded96cc6f",
    url = "https://github.com/bazelbuild/rules_python/releases/download/0.2.0/rules_python-0.2.0.tar.gz",
)

load("@rules_python//python:pip.bzl", "pip_install")

http_archive(
    name = "rules_pkg",
    sha256 = "038f1caa773a7e35b3663865ffb003169c6a71dc995e39bf4815792f385d837d",
    urls = [
        "https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
        "https://github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
    ],
)

load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")

rules_pkg_dependencies()

git_repository(
    name = "googletest",
    commit = "703bd9caab50b139428cea1aaff9974ebee5742e",
    remote = "https://github.com/google/googletest",
    shallow_since = "1570114335 -0400",
)

local_repository(
    name = "torch_tensorrt",
    path = "/opt/conda/lib/python3.8/site-packages/torch_tensorrt",
)

new_local_repository(
    name = "cuda",
    build_file = "@//third_party/cuda:BUILD",
    path = "/usr/local/cuda-10.2/",
)

new_local_repository(
    name = "cublas",
    build_file = "@//third_party/cublas:BUILD",
    path = "/usr",
)

#############################################################################################################

new_local_repository(
    name = "libtorch",
    path = "/home/nvidia/.local/lib/python3.6/site-packages/torch",
    build_file = "third_party/libtorch/BUILD",
)

new_local_repository(
    name = "libtorch_pre_cxx11_abi",
    path = "/home/nvidia/.local/lib/python3.6/site-packages/torch",
    build_file = "third_party/libtorch/BUILD",
)

new_local_repository(
    name = "cudnn",
    path = "/usr/",
    build_file = "@//third_party/cudnn/local:BUILD",
)

new_local_repository(
    name = "tensorrt",
    path = "/usr/",
    build_file = "@//third_party/tensorrt/local:BUILD",
)

##########################################################################

pip_install(
    name = "pylinter_deps",
    requirements = "//tools/linter:requirements.txt",
)
```
That error is expected. There are some breaking changes for JetPack 4.5 (specifically, we use APIs introduced in TensorRT 8.2) in preparation for our next release. I would suggest checking out the 1.0 tag if you just need a working version. If you need master, I can provide more information on how to backport.
I have tried all the tags, and only v0.3.0 successfully installed the C++ library. But I ran into a problem when I tried to install the Python API. It shows this error:
```
trtorch/csrc/tensorrt_backend.h:13:15: error: ‘c10::IValue trtorch::backend::TensorRTBackend::preprocess(c10::IValue, c10::impl::GenericDict)’ marked ‘override’, but does not override
   c10::IValue preprocess(c10::IValue mod, c10::impl::GenericDict method_compile_spec) override;
               ^~~~~~~~~~
In file included from /home/nvidia/.local/lib/python3.6/site-packages/torch/include/c10/core/StorageImpl.h:6:0,
                 from /home/nvidia/.local/lib/python3.6/site-packages/torch/include/c10/core/Storage.h:3,
                 from /home/nvidia/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:12,
                 from /home/nvidia/.local/lib/python3.6/site-packages/torch/include/ATen/Tensor.h:3,
                 from /home/nvidia/.local/lib/python3.6/site-packages/torch/include/ATen/Context.h:4,
                 from /home/nvidia/.local/lib/python3.6/site-packages/torch/include/ATen/ATen.h:9,
                 from /home/nvidia/.local/lib/python3.6/site-packages/torch/include/torch/csrc/jit/ir/attributes.h:2,
                 from /home/nvidia/.local/lib/python3.6/site-packages/torch/include/torch/csrc/jit/ir/ir.h:3,
                 from /home/nvidia/.local/lib/python3.6/site-packages/torch/include/torch/csrc/jit/passes/lower_graph.h:3,
                 from trtorch/csrc/tensorrt_backend.cpp:1:
```
So is it better for me to upgrade JetPack to 4.6? I am using an AGX Xavier.
I want to run inference with my model on the Jetson. The model was converted to TensorRT using the latest Torch-TensorRT Docker container on my computer. But when I run inference using v0.3.0 on the AGX Xavier I get this error:
```
terminate called after throwing an instance of 'c10::Error'
  what(): __setstate__() Expected a value of type 'str' for argument '_1' but instead found type 'List[str]'.
Position: 1
Declaration: __setstate__(__torch__.torch.classes.tensorrt.Engine _0, str _1) -> (NoneType _0)
Exception raised from checkArg at bazel-out/aarch64-opt/bin/external/libtorch/_virtual_includes/ATen/ATen/core/function_schema_inl.h:184 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0xa0 (0x7f81d02508 in /home/nvidia/.local/lib/python3.6/site-packages/torch/lib/libc10.so)
```
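This failure looks like a serialization-format mismatch rather than a problem with the model itself: the newer Torch-TensorRT container appears to store the engine state as a `List[str]`, while the v0.3.0 `__setstate__` on the Jetson expects a single `str`. A minimal pure-Python analogy of that mismatch (the `EngineV03` class below is a hypothetical stand-in, not the real TorchScript class):

```python
class EngineV03:
    """Stand-in for the old engine class: its __setstate__ accepts a single str."""

    def __setstate__(self, state):
        # Mirrors the checkArg failure: reject anything that is not a plain str.
        if not isinstance(state, str):
            raise TypeError(
                "Expected a value of type 'str' for argument '_1' "
                f"but instead found type '{type(state).__name__}'"
            )
        self.serialized = state


old_state = "serialized-engine-bytes"              # shape an old release writes
new_state = ["serialized-engine-bytes", "meta"]    # shape a newer release writes

engine = EngineV03()
engine.__setstate__(old_state)   # loads fine

try:
    engine.__setstate__(new_state)   # reproduces the version-mismatch failure
except TypeError as err:
    print(err)
```

The practical implication is that the compiled module should be produced and loaded by the same Torch-TensorRT version, e.g. by recompiling the model on the Jetson with the version installed there rather than exporting it from a newer container.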
Yes, using the latest JetPack is preferable if possible.
Thank you @narendasan
I tried to install Torch-TensorRT and I ran into this installation error:
```
/home/nvidia/Torch-TensorRT/cpp/bin/torchtrtc/BUILD:10:10: no such package '@libtorch//': The repository '@libtorch' could not be resolved and referenced by '//cpp/bin/torchtrtc:torchtrtc'
```
I am using JetPack 4.5 and building with:

```
bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.5
```
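The `no such package '@libtorch//'` message means the WORKSPACE that Bazel is reading defines no repository named `libtorch`. For Jetson/aarch64 builds this repository is typically supplied by `new_local_repository` entries pointing at the locally installed PyTorch. A sketch of what those entries look like (the path below is an assumption and must match where pip installed torch on your device):

```python
new_local_repository(
    name = "libtorch",
    path = "/home/nvidia/.local/lib/python3.6/site-packages/torch",  # adjust to your install
    build_file = "third_party/libtorch/BUILD",
)

new_local_repository(
    name = "libtorch_pre_cxx11_abi",
    path = "/home/nvidia/.local/lib/python3.6/site-packages/torch",  # adjust to your install
    build_file = "third_party/libtorch/BUILD",
)
```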