I can think of the following scenarios:
1. Compile the plugin on an x86 PC but try to run it on ARM64 (Jetson), or vice versa. -> This obviously won't work, since the instruction sets are different.
2. Compile and run the plugin on the same computer architecture, but with different versions of the CUDA or TensorRT libraries. -> This is the case you ran into. I don't think it's guaranteed to work. You have to try and see the result.
3. Compile and run the plugin on the same computer architecture and with the same versions of the CUDA and TensorRT libraries, but on computers that might have different GPUs. -> I think this should work. You just need to set "GPU compute" properly in the Makefile, covering all the GPUs you're going to use (see the sketch below). https://github.com/jkjung-avt/tensorrt_demos/blob/f49f1f75ac39efe610b7bf06b8ba5843e57023a3/plugins/Makefile#L7-L8
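For reference, here's a minimal sketch of what such a multi-architecture build rule might look like. This is an illustration, not the repo's actual Makefile: the variable names, source file list, and the specific compute capabilities shown are assumptions you'd adapt to the GPUs you actually deploy on.

```makefile
# Hypothetical Makefile snippet (not the repo's actual Makefile): build the
# plugin as a fat binary covering several GPU architectures by passing
# multiple -gencode flags to nvcc. Adjust the compute capabilities to match
# your deployment targets (e.g. 61 = Pascal, 72 = Jetson Xavier).
NVCC ?= nvcc

# One cubin per target architecture, plus PTX for the newest one so the
# driver can JIT-compile for future GPUs not listed at build time.
GENCODES = -gencode arch=compute_61,code=sm_61 \
           -gencode arch=compute_70,code=sm_70 \
           -gencode arch=compute_72,code=sm_72 \
           -gencode arch=compute_72,code=compute_72

libyolo_layer.so: yolo_layer.cu yolo_layer.h
	$(NVCC) $(GENCODES) -shared -Xcompiler -fPIC -lnvinfer -o $@ yolo_layer.cu
```

Each `-gencode arch=...,code=sm_XX` pair embeds machine code for one architecture, so the resulting .so runs natively on any of the listed GPUs; the final `code=compute_72` entry additionally embeds PTX that the driver can JIT-compile for newer GPUs.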
I have tried doing it and I got no errors. But I'm curious to know whether this is the correct way to do it. This way I can use the same 'libyolo_layer.so' file and skip the extra step, making it easy for me to deploy the same application across multiple devices.
It does give this warning, though: