edmuthiah opened 2 years ago
Going on two years here, but this would be awesome! The important thing is that downloading weights and compiling the TensorRT engine would both happen at the build stage, as opposed to PyTorch compile, which requires JIT compilation to happen when the setup function is called.
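For anyone landing here, a minimal sketch of what build-time compilation could look like using `trtexec` (the CLI that ships with TensorRT), assuming the model has already been exported to ONNX — the filenames are just placeholders:

```shell
# Build-time step: compile the ONNX model into a serialized TensorRT
# engine once, so no JIT compilation happens when setup() runs.
trtexec --onnx=model.onnx \
        --saveEngine=model.engine \
        --fp16   # optional: enable FP16 kernels where supported
```

At serve time the process only needs to deserialize `model.engine`, which is fast compared to rebuilding the engine on every cold start.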
Also happy to contribute.
Any update on this?
TensorRT is widely used in production ML systems. However, it adds another layer of dependency hell across tensorrt/python/cuda/cudnn versions.
Right now the cleanest solution seems to be using the NVIDIA-provided NGC container, where those versions are pinned together. It would be great to support this in your framework. Happy to contribute.
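To make the NGC suggestion concrete, here's a rough sketch of a Dockerfile built on the NGC TensorRT image — the tag, filenames, and entrypoint are illustrative, not from this project:

```dockerfile
# Sketch: base on NVIDIA's NGC TensorRT image so the
# tensorrt/CUDA/cuDNN versions are pinned together by NVIDIA.
# The tag is an example; pick one matching your host driver.
FROM nvcr.io/nvidia/tensorrt:24.04-py3

WORKDIR /app
COPY model.onnx .

# Compile the engine at *image build* time, not at startup.
RUN trtexec --onnx=model.onnx --saveEngine=model.engine

COPY predict.py .
CMD ["python3", "predict.py"]
```

The key point is that `trtexec` runs in the `RUN` step, so the dependency resolution and engine compilation are both baked into the image.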
TensorRT docs:
TensorRT NGC Container docs:
@bfirsh