linghu8812 / tensorrt_inference


Model conversion on Multi-GPU computer #103

Open Zhong-zh13 opened 3 years ago

Zhong-zh13 commented 3 years ago

Hello and thank you for your wonderful tool. It works very well! I just have a small question.

I have 4 GPUs on my computer. Since a TensorRT engine is device-specific, how can we select which GPU to use when running the C++ program for the ONNX -> TensorRT conversion?

I searched the code but couldn't find the relevant part.

Thank you very much

linghu8812 commented 3 years ago

A simple way: use CUDA_VISIBLE_DEVICES.
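
For reference, here is a minimal sketch (not from this repo; the binary name and helper function are illustrative) of the two usual options: hiding all but one GPU via the environment variable, which needs no code change, or calling `cudaSetDevice()` in the C++ code before creating the TensorRT builder, so the engine is built on and serialized for that GPU:

```cpp
// Option 1 (no code change): restrict which GPUs the process can see, e.g.
//   CUDA_VISIBLE_DEVICES=2 ./your_conversion_program   (hypothetical binary name)
//
// Option 2 (programmatic): select the device before any TensorRT work.

#include <cuda_runtime_api.h>
#include <iostream>

// Select the GPU that all subsequent CUDA/TensorRT calls in this
// thread will target. Returns false if the device is unavailable.
bool selectGpu(int device_id) {
    int device_count = 0;
    if (cudaGetDeviceCount(&device_count) != cudaSuccess ||
        device_id < 0 || device_id >= device_count) {
        std::cerr << "GPU " << device_id << " not available" << std::endl;
        return false;
    }
    return cudaSetDevice(device_id) == cudaSuccess;
}
```

Note that a TensorRT engine built this way should also be deserialized and run on the same (or an identical) GPU model, since serialized engines are not portable across device architectures.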

Zhong-zh13 commented 3 years ago

@linghu8812 Thank you for your reply! I've already figured out how to modify the C++ code to achieve that. Thanks again for your tool.