Open liuhao-97 opened 3 weeks ago
Yes. As long as the PyTorch installed on your machine and linked to nnstreamer supports NVIDIA GPUs, you can enforce GPU usage by adding a property to tensor_filter: `accelerator=true:gpu`.
Check the nnstreamer.ini file you are using, too:
```ini
...
[pytorch]
enable_use_gpu=TRUE
```
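For example, a pipeline along the following lines would request GPU acceleration. This is only a sketch: the model file and tensor dimensions are taken from the reporter's pipeline later in this thread, and the typecast/normalization step is an assumption that depends on how the model was trained.

```sh
gst-launch-1.0 videotestsrc num-buffers=1 ! \
  video/x-raw,format=RGB,width=32,height=32 ! tensor_converter ! \
  tensor_transform mode=arithmetic option=typecast:float32,div:255 ! \
  tensor_transform mode=transpose option=1:2:0:3 ! \
  tensor_filter framework=pytorch model=simple_dnn.torchscript.pt accelerator=true:gpu \
    input=32:32:3:1 inputtype=float32 output=32:32:4:1 outputtype=float32 ! \
  tensor_sink
```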
And tf-lite these days supports GPUs in general; you can enable GPU delegation with tflite.
Thanks for your response! When I add accelerator=true:gpu to tensor_filter, I get the output below. I think this happens because of a PyTorch version mismatch: the PyTorch version on my Jetson AGX is 1.11.0, while nnstreamer-pytorch supports PyTorch 1.10.2 if I am correct. May I ask how to make nnstreamer-pytorch support PyTorch 1.11.0?
Thanks!
```
** Message: 16:43:59.641: gpu = 1, accl = gpu

(gst-launch-1.0:2732): CRITICAL : 16:43:59.669: Exception while loading the model: PyTorch is not linked with support for cuda devices
Exception raised from getDeviceGuardImpl at /build/pytorch-53jnGq/pytorch-1.10.2/c10/core/impl/DeviceGuardImplInterface.h:318 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits
```
The error message says what's wrong:

> CRITICAL : 16:43:59.669: Exception while loading the model: PyTorch is not linked with support for cuda devices

Your PyTorch is not built for CUDA. You need to install a PyTorch build that is CUDA-enabled.
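A quick way to verify which build is installed is the standard PyTorch Python API:

```python
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # None on a CPU-only build
print(torch.cuda.is_available())  # False if PyTorch is not linked with CUDA
```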
Thanks for your response!
I have installed the CUDA build of PyTorch 1.11.0 on my Jetson AGX Orin. I have checked torch.cuda.is_available() and it returns True. But I think nnstreamer-pytorch only supports PyTorch 1.10.2.
Besides, I also double-checked libtorch by building an example-app.cpp file, shown below, and it works fine.
I checked the /lib path after installing nnstreamer-pytorch, and only libtorch.so, libtorch_cpu.so, libtorch_global_deps.so, and libtorch_python.so exist there; there is no libtorch_gpu.so.
I linked the libtorch_cpu.so used by /usr/lib/nnstreamer/filters/libnnstreamer_filter_pytorch.so to /usr/local/lib/python3.8/dist-packages/torch/lib/libtorch_cpu.so (libtorch_gpu.so exists under this path), and it gives a core dump.
So I think it is because of the difference in PyTorch versions (Jetson offers a docker with PyTorch 1.11.0, while nnstreamer-pytorch requires PyTorch 1.10.2). NVIDIA and PyTorch don't provide a PyTorch 1.10.2 .whl for aarch64.
I also saw this question about re-building nnstreamer-pytorch: https://lists.lfaidata.foundation/g/nnstreamer-technical-discuss/topic/which_version_of_pytorch_is/87447736
May I ask, is it possible to build nnstreamer-pytorch against PyTorch 1.11.0 myself on Jetson? Could you please explain how to do it? Sorry to disturb you so much.
Thanks!
Related discussion: https://discuss.pytorch.org/t/error-pytorch-is-not-linked-with-support-for-cuda-devices/103807
```cpp
#include <torch/script.h> // One-stop header.
#include <torch/torch.h>  // More explicit include for full Torch functionality
#include <iostream>
#include <memory>

int main()
{
  try
  {
    // Check if CUDA is available and the number of CUDA devices
    bool isCudaAvailable = torch::cuda::is_available();
    std::size_t devicesCount = torch::cuda::device_count();
    std::cout << "CUDA devices count - " << devicesCount << '\n';
    std::cout << (isCudaAvailable ? "CUDA available" : "CUDA NOT available") << '\n';

    // Check if cuDNN is available
    bool isCudnnAvailable = torch::cuda::cudnn_is_available();
    std::cout << (isCudnnAvailable ? "CUDNN available" : "CUDNN NOT available") << '\n';

    if (isCudaAvailable)
    {
      // Deserialize and move the model to GPU
      torch::jit::script::Module module = torch::jit::load("../simple_dnn_pt110.torchscript.pt");
      module.to(torch::kCUDA);
      module.eval();

      // Create and process a tensor on GPU
      auto input = torch::randn({1, 3, 32, 32}, torch::device(torch::kCUDA));
    }
  }
  catch (const c10::Error &e)
  {
    std::cerr << "torch error - \n" << e.what() << '\n';
    return -1;
  }
  return 0;
}
```
This is the CMakeLists.txt:
```cmake
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(custom_ops)

find_package(Torch REQUIRED)

add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
```
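For reference, such an example is typically configured by pointing CMake at the libtorch that pip installed, so that `find_package(Torch)` succeeds. The prefix path below is an assumption based on the paths mentioned in this thread; adjust it to your machine.

```sh
mkdir build && cd build
# CMAKE_PREFIX_PATH must point at the torch package so find_package(Torch) works
cmake -DCMAKE_PREFIX_PATH=/usr/local/lib/python3.8/dist-packages/torch ..
cmake --build .
./example-app
```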
The result is:

```
CUDA devices count - 1
CUDA available
CUDNN available
```
> Thanks for your response!
> I have installed the CUDA build of PyTorch 1.11.0 on my Jetson AGX Orin. I have checked torch.cuda.is_available() and it returns True. But I think nnstreamer-pytorch only supports PyTorch 1.10.2.

Unless the PyTorch API has lost backward compatibility, 1.11.0 should also be supported.

> Besides, I also double-checked libtorch by building an example-app.cpp file, shown above, and it works fine.
> I checked the /lib path after installing nnstreamer-pytorch, and only libtorch.so, libtorch_cpu.so, libtorch_global_deps.so, and libtorch_python.so exist there; there is no libtorch_gpu.so.
> I linked the libtorch_cpu.so used by /usr/lib/nnstreamer/filters/libnnstreamer_filter_pytorch.so to /usr/local/lib/python3.8/dist-packages/torch/lib/libtorch_cpu.so (libtorch_gpu.so exists under this path), and it gives a core dump.
If this "link" is a filesystem link (ln
), not toolchain's link (-l/-L
of toolchain options), yes, it won't work.
You may need to compile libnnstreamer_filter_pytorch.so with a Pytorch of yours available and linked by your toolchain. In other words, when you compile libnnstreamer_filter_pytorch.so, gcc ... -lpytorch
should be able to link to the Pytorch you intend to use.
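If you rebuild it yourself, here is a minimal sketch assuming a meson-based nnstreamer checkout. The option name below is an assumption; check meson_options.txt in your checkout for the exact name of the torch filter option.

```sh
git clone https://github.com/nnstreamer/nnstreamer.git
cd nnstreamer
# Make sure the compiler/linker can find your CUDA-enabled libtorch first.
meson setup build -Dtorch-support=enabled  # option name is an assumption; verify in meson_options.txt
ninja -C build
ninja -C build install  # installs the rebuilt filter in place of the packaged one
```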
If you want to see which .so files are linked to the given libnnstreamer_filter_pytorch.so, use `ldd`.
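For example, with the path mentioned above:

```sh
# List the shared objects the filter actually resolves at load time.
ldd /usr/lib/nnstreamer/filters/libnnstreamer_filter_pytorch.so | grep -i torch
# If this resolves to a CPU-only libtorch, the filter was built against the wrong PyTorch.
```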
Hi team,
I am running tensor_filter with framework=pytorch on a Jetson AGX Orin (it contains a GPU), and I generated the PyTorch model with PyTorch 1.3.1. The command I am running is as follows:

```sh
gst-launch-1.0 filesrc location=rgb.jpg ! jpegdec ! videoconvert ! videoscale ! \
  video/x-raw,format=RGB,width=100,height=100 ! tensor_converter ! \
  tensor_transform mode=transpose option=1:2:0:3 ! \
  tensor_filter framework=pytorch model=simple_dnn.torchscript.pt \
    input=32:32:3:1 inputtype=float32 inputname=input \
    output=32:32:4:1 outputtype=float32 ! \
  tensor_sink name=tensor_sink
```
I got the response below, which shows the GPU is not used. I double-checked GPU utilization to confirm the GPU is not used.
I also checked the previous issue https://github.com/nnstreamer/nnstreamer/issues/3543, which says tflite doesn't support NVIDIA GPUs. Then what about nnstreamer-pytorch? Does it support NVIDIA GPUs? Or did I do something wrong?
Thanks!
```
** Message: 11:27:39.846: gpu = 0, accl = cpu
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:00.000192769
Setting pipeline to NULL ...
Freeing pipeline ...
```