angry-crab opened this issue 1 year ago
@ambroise-arm Do you have any comment on this? Thanks.
I don't immediately see how the CUDA enablement of tvm_vendor would work. As it currently is, it gets compiled and distributed as a ros-${ROS_DISTRO}-tvm-vendor package. Unless we are considering tvm_vendor becoming part of the Autoware sources, which is possible, but we have to keep in mind that it would probably become one of the longest packages to compile.
I don't immediately see how the CUDA enablement of tvm_vendor would work. As it currently is, it gets compiled and distributed as a ros-${ROS_DISTRO}-tvm-vendor package. Unless we are considering tvm_vendor becoming part of the Autoware sources, which is possible, but we have to keep in mind that it would probably become one of the longest packages to compile.
Sorry for the confusion. What I'm saying is that we could detect CUDA before these lines and set ENABLE_CUDA according to the result.
https://github.com/autowarefoundation/tvm_vendor/blob/9e1accfa9477ac691c1ca2f02427b9a588a7d910/CMakeLists.txt#L49-L52
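For illustration, a minimal sketch of what that detection could look like just above those lines (this assumes ENABLE_CUDA is the variable those lines consume, and uses find_package(CUDAToolkit), which requires CMake 3.17 or newer):

```cmake
# Hypothetical sketch: choose ENABLE_CUDA based on whether a CUDA toolkit
# is found on the machine doing the build.
find_package(CUDAToolkit QUIET)
if(CUDAToolkit_FOUND)
  set(ENABLE_CUDA ON)
  message(STATUS "CUDA toolkit ${CUDAToolkit_VERSION} found; enabling CUDA for tvm_vendor")
else()
  set(ENABLE_CUDA OFF)
  message(STATUS "No CUDA toolkit found; building tvm_vendor without CUDA")
endif()
```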
I'm not trying to put tvm_vendor into universe.
What I'm saying is that we could detect CUDA before these lines and set ENABLE_CUDA according to the result.
But we would be detecting whether CUDA is present on the CI runner creating the ros-${ROS_DISTRO}-tvm-vendor package, not on the end user machine. And what I was mentioning here https://github.com/autowarefoundation/autoware.universe/issues/2186#issuecomment-1368734108 is that if the package is built with CUDA support, then the user is required to have CUDA support on his machine in order to run an inference on ANY backend, including non-CUDA ones (unless TVM removed this limitation since the last time I tried it).
This is why I was mentioning putting tvm_vendor in universe, because this is the only way I see it working.
if the package is built with CUDA support, then the user is required to have CUDA support on his machine in order to run an inference on ANY backend, including non-CUDA ones (unless TVM removed this limitation since the last time I tried it).
I see. Let me check if that is still true.
@ambroise-arm I compiled the model of lidar_apollo_segmentation_tvm with cuda support and ran the llvm backend without any problem in a humble-latest image, which does not have a GPU. I guess it seems okay then?
This is a follow-up issue from Enable OpenCL Backend for TVM.
We may want to bring up the CUDA backend for TVM for two reasons:
And the reason it was not done in the previous issue, Enable OpenCL Backend for TVM, is that I was not able to compile Lidar CenterPoint models due to some errors, and I did not have time to look into the details. However, to proceed with development, I believe it is necessary to enable CUDA. Regarding the comments from Ambroise, it is true that CUDA libraries need to be handled beforehand. I think we can try to detect the existence of CUDA components and patch tvm_vendor accordingly.