Closed: y-okumura-isp closed this issue 3 years ago.
Thanks for reporting the issue. Below is the relevant source code, and the relevant documentation is here: IBuilderConfig. I followed NVIDIA's API and set DLA_Core
in the TensorRT builder config. I'm not sure why it does not work. You may want to redirect this question to NVIDIA.
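For reference, below is a minimal sketch (not the repo's actual code) of what selecting a DLA core at build time looks like with the TensorRT C++ API. The function and variable names are placeholders, and error handling and network construction are omitted; the caller is assumed to already have a builder and a populated network definition (e.g. parsed from ONNX).

#include <NvInfer.h>

// Build an engine that targets a specific DLA core (0 or 1 on Xavier / Xavier NX).
nvinfer1::ICudaEngine* buildEngineOnDla(nvinfer1::IBuilder& builder,
                                        nvinfer1::INetworkDefinition& network,
                                        int dlaCore)
{
    nvinfer1::IBuilderConfig* config = builder.createBuilderConfig();

    // Run layers on DLA by default, fall back to the GPU for unsupported layers.
    config->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
    config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);
    config->setFlag(nvinfer1::BuilderFlag::kFP16);  // DLA requires FP16 or INT8
    config->setMaxWorkspaceSize(1 << 28);           // example workspace size; adjust as needed

    // Select which DLA core the engine is built for.
    config->setDLACore(dlaCore);

    return builder.buildEngineWithConfig(network, *config);
}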
Thank you for your comment. It looks like the code is correct. I will try posting on the NVIDIA forum. By the way, can anyone reproduce this situation? How about on boards other than the NX?
I had the same question and found a solution on my device. I tried config->setDLACore(0) and config->setDLACore(1), but the model always ran on DLA 0. Then I set the DLA core at runtime instead, and the model ran on the correct DLA.
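In case it helps, here is a minimal sketch of selecting the DLA core at deserialization time via IRuntime::setDLACore. The function name and the logger/blob variables are placeholders, the serialized engine is assumed to already be in memory, and error handling is omitted.

#include <NvInfer.h>
#include <cstddef>

// Deserialize an engine onto a chosen DLA core (0 or 1 on Xavier / Xavier NX).
nvinfer1::ICudaEngine* loadEngineOnDla(nvinfer1::ILogger& logger,
                                       const void* engineBlob, size_t blobSize,
                                       int dlaCore)
{
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);

    // Select the DLA core before deserializing; this only has an effect
    // if the engine was built for DLA in the first place.
    runtime->setDLACore(dlaCore);

    return runtime->deserializeCudaEngine(engineBlob, blobSize);
}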
In my understanding, we can use DLA core 1 by building the model for it, but we cannot specify the core at runtime. https://github.com/jkjung-avt/tensorrt_demos/issues/394

Although I set
--dla_core 1
at build time, it looks like DLA Core 0 is used. Here is my environment:
/etc/nv_tegra_release: R32 (release), REVISION: 5.1

I built the TensorRT model as follows:
Then I checked which DLA core is used. We can do this by checking
/sys/.../runtime_status
according to https://forums.developer.nvidia.com/t/matrix-multiply-on-dla-and-checking-dla-usage/84338/2. I got the result below; it looks like DLA Core 0 is used.
All suggestions are welcome. Thanks.