They are not part of the CUDA Toolkit; they are separate NVIDIA products built on top of the CUDA Toolkit. So you need to wrap them yourself.
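Concretely, "wrapping" usually means exposing the locally installed headers and shared libraries as a `cc_library`. A minimal sketch, assuming cuDNN is installed under `/usr/local/cuda` and handed to a `new_local_repository` as its build file; the target name, paths, and globs are placeholders you would adapt to your system:

```
# cudnn.BUILD — hypothetical build file used by new_local_repository(path = "/usr/local/cuda", ...)
cc_library(
    name = "cudnn",
    hdrs = glob(["include/cudnn*.h"]),
    srcs = glob(["lib64/libcudnn*.so*"]),
    includes = ["include"],
    visibility = ["//visibility:public"],
)
```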
Would it make sense to add some documentation on a best-practice approach to the repo/examples? I expect people searching for this might end up here.
# In your WORKSPACE.bazel
load("//third_party/cudnn:deps.bzl", "cudnn_dependencies")
cudnn_dependencies()
The code is extracted from a personal private repo and is rather outdated, so you might want to tweak it to fit your needs. Treat it as public domain, or WTFPL if you prefer.
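The `deps.bzl` itself is not shown in this thread; as a rough, hypothetical sketch (not the original code), a `cudnn_dependencies()` macro could simply register the local install as an external repository, assuming a `cudnn.BUILD` like the one above:

```
# third_party/cudnn/deps.bzl — illustrative sketch, not the original code
def cudnn_dependencies():
    """Registers the local cuDNN install as an external repository."""
    native.new_local_repository(
        name = "cudnn",
        path = "/usr/local/cuda",  # assumption: adjust to wherever cuDNN lives
        build_file = "//third_party/cudnn:cudnn.BUILD",
    )
```

Targets can then depend on it as `@cudnn//:cudnn`.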
For TensorRT, you should follow a similar approach to import the library.
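For example, a hypothetical TensorRT wrapper along the same lines, assuming a `new_local_repository(path = "/usr", ...)` and the Ubuntu .deb layout (headers in `/usr/include/x86_64-linux-gnu`, libraries in `/usr/lib/x86_64-linux-gnu`); adjust names and paths to your install:

```
# tensorrt.BUILD — hypothetical build file for a new_local_repository rooted at /usr
cc_library(
    name = "nvinfer",
    hdrs = glob(["include/x86_64-linux-gnu/NvInfer*.h"]),
    srcs = glob(["lib/x86_64-linux-gnu/libnvinfer*.so*"]),
    includes = ["include/x86_64-linux-gnu"],
    visibility = ["//visibility:public"],
)
```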
Is there a recommended way on how to use `cudnn` and `tensorrt`? Is this something that should be supported by rules_cuda?