Closed robotichustle closed 3 years ago
Hey @robotichustle,
Thank you for using our code.
The catkin build TypeError is a Python-related issue; see more here. I'm not sure whether it is related to the TRT model problem or not.
As for the TRT model problem, it looks like a TensorRT-related issue. We never encountered this error ourselves, so it's hard for us to reproduce and debug it.
However, there are some similar issues reported by others. You may have a look at #15 and #22.
Hey @robotichustle, is there any update on this issue?
@robotichustle Previously I was using CUDA 10.1 + cuDNN 7.6.5 + TRT 5.1.5 with an NVIDIA P1000, and I ran into exactly the same problem. I tried downgrading CUDA 10.1 to CUDA 10.0 and it works well now. I also tested this workaround on another device with a GTX 1060, and it works well too. I suspect TRT 5.1.5 may be incompatible with CUDA versions higher than 10.0.
@ZhenminHuang Thank you so much for the feedback!
Since there is no further update about this issue, I'm going to close it. Please feel free to ask me to reopen it if needed.
Check that your cuBLAS version matches your CUDA version.
I also tested with GTX1660 + cuda10.0 + cudnn7.5.0 + TensorRT5.1.5.0, and it works well.
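As an illustrative sketch (not an official NVIDIA compatibility matrix — the tuples below are just the anecdotal combinations reported as working in this thread), a quick pre-flight check like this can flag a CUDA/cuDNN/TensorRT mismatch before building:

```python
# Hedged sketch: compare an installed CUDA/cuDNN/TensorRT combination against
# the combinations reported working in this thread. These tuples are anecdotal
# reports, not an official NVIDIA compatibility matrix.

# (CUDA, cuDNN, TensorRT) combinations reported working above
KNOWN_GOOD = {
    ("10.0", "7.6.5", "5.1.5"),  # P1000 / GTX 1060 after downgrading from CUDA 10.1
    ("10.0", "7.5.0", "5.1.5"),  # GTX 1660
}

def major_minor(version: str) -> str:
    """Reduce a version string like '10.0.130' to '10.0'."""
    return ".".join(version.split(".")[:2])

def looks_compatible(cuda: str, cudnn: str, trt: str) -> bool:
    """Return True if the (CUDA, cuDNN, TRT) combo matches a reported-good one,
    comparing on major.minor only."""
    installed = (major_minor(cuda), major_minor(cudnn), major_minor(trt))
    return any(
        installed == (major_minor(c), major_minor(d), major_minor(t))
        for c, d, t in KNOWN_GOOD
    )

print(looks_compatible("10.0.130", "7.6.5", "5.1.5"))  # True
print(looks_compatible("10.1.243", "7.6.5", "5.1.5"))  # False: CUDA 10.1 + TRT 5.1.5 failed in this thread
```

This only codifies the observations above; the real check is still rebuilding against CUDA 10.0 as @ZhenminHuang described.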
hi, @GuoFeng-X, how much GPU memory does your card have?
I have the same problem. Can anybody propose a solution?
Thanks for your work. I am really excited to demo it but I'm unable to run the demo.
I am using Ubuntu 18.04 + CUDA 10.1 + cuDNN 7.5.1 + TensorRT 5.1.5 (as per your README and NVIDIA's version compatibility). The GPU is a GeForce GTX 1060.
I get the following runtime error as it can't generate the TRT file from the SemanticKitti darknet53 model:
I am not sure if it matters, but the catkin build process passed with some warnings. Here is the output:
Also, not sure if this is relevant, but running the sample_onnx_mnist sample seems to work fine. Success:
I'd appreciate your help. Thank you in advance.