antabangun / coex


TensorRT model #8

Closed kaleab-k closed 1 year ago

kaleab-k commented 2 years ago

Hello @antabangun,
Thank you for the great work! Have you tried converting the TorchScript model to TensorRT? If so, could you please share either the model or how you did it? I tried it with torch2trt, but it fails because some of the layers are not supported. Thank you!
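For context, my attempt followed the usual torch2trt pattern; the stand-in module and input shapes below are placeholders rather than the actual CoEx code:

```python
import torch
import torch.nn as nn
from torch2trt import torch2trt  # https://github.com/NVIDIA-AI-IOT/torch2trt

# Placeholder two-input module standing in for the stereo network.
class StandIn(nn.Module):
    def forward(self, left, right):
        return torch.cat([left, right], dim=1)

model = StandIn().eval().cuda()
left = torch.randn(1, 3, 384, 1248).cuda()   # assumed input resolution
right = torch.randn(1, 3, 384, 1248).cuda()

# torch2trt traces the forward pass and maps each op to a TensorRT layer
# through registered converters; ops without a converter are where the
# conversion of the real model breaks down.
model_trt = torch2trt(model, [left, right])
```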

antabangun commented 2 years ago

Hi @kaleab-k, Thanks for your interest! I have not yet tried converting the model to TensorRT. I was thinking of using the newly released official Torch-TensorRT, but haven't had the time to do so. I can try it and give you an update, hopefully within this week :)
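From the docs, the conversion should essentially be a single compile call on a TorchScript module. A minimal sketch, with a placeholder module and assumed input shapes and precision:

```python
import torch
import torch.nn as nn
import torch_tensorrt

# Placeholder two-input module standing in for the stereo network.
class StandIn(nn.Module):
    def forward(self, left, right):
        return torch.cat([left, right], dim=1)

# Torch-TensorRT compiles a TorchScript module ahead of time.
scripted = torch.jit.script(StandIn().eval().cuda())

trt_mod = torch_tensorrt.compile(
    scripted,
    inputs=[
        torch_tensorrt.Input((1, 3, 384, 1248)),  # assumed left image shape
        torch_tensorrt.Input((1, 3, 384, 1248)),  # assumed right image shape
    ],
    enabled_precisions={torch.float},
)
```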

kaleab-k commented 2 years ago

Great, thanks a lot!!

antabangun commented 2 years ago

Hi @kaleab-k ,

I tried following the torch_tensorrt tutorial and managed to partially convert the model; I've pushed the changes as new updates.

However, I encountered issues when converting the regression portion of the code due to a tensor type compatibility problem that I could not find a solution for. So currently TensorRT computes the final cost volume and the superpixel features, and the regression is performed in PyTorch. I also found that PyTorch's unfold operation is not supported, so I've modified some parts of the models; a generic workaround is sketched below.
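For anyone hitting the same unfold limitation: a common workaround is to rebuild the im2col patches from padded slices, using only pad/slice/stack/reshape ops, which tend to convert more reliably. This is a generic sketch of that idea (stride 1, odd kernel size), not necessarily the exact modification in this repo:

```python
import torch
import torch.nn.functional as F

def unfold_via_slices(x: torch.Tensor, k: int) -> torch.Tensor:
    """Emulate F.unfold(x, kernel_size=k, padding=k // 2) for stride 1
    and odd k, using only pad/slice/stack/reshape ops."""
    pad = k // 2
    x = F.pad(x, (pad, pad, pad, pad))            # (B, C, H + 2p, W + 2p)
    b, c, h, w = x.shape
    patches = []
    for i in range(k):                            # kernel row offset
        for j in range(k):                        # kernel column offset
            patches.append(x[:, :, i:i + h - k + 1, j:j + w - k + 1])
    out = torch.stack(patches, dim=2)             # (B, C, k*k, H, W)
    return out.flatten(3).flatten(1, 2)           # (B, C*k*k, H*W)
```

One can sanity-check it against `F.unfold(x, k, padding=k // 2)` on a random tensor; the channel ordering (C outer, kernel offsets row-major inner) matches unfold's output layout.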

For now, you can install the torch_tensorrt module along with TensorRT 8.4.1.5, which is compatible with cudatoolkit 11.3. I've also updated environment.yml with the compatible package versions, so be sure to update your environment. Then the torch-to-TensorRT conversion, followed by inference, can be run with the following command, which also saves the TensorRT-converted model:

```
python torch_to_tensorrt.py
```

Unfortunately, I also encountered a problem when loading the saved TensorRT engine; for reference, the load path that fails is sketched below.
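This is the standard TorchScript save/load path (the filename is just an example, not the one the script uses):

```python
import torch
import torch_tensorrt  # importing registers the TensorRT runtime ops with TorchScript

# The compiled module is a TorchScript module, so it saves like any other:
# torch.jit.save(trt_mod, "coex_trt.ts")

# Loading it back in a fresh process. In my understanding, this can fail if
# the TensorRT version at load time differs from the one used at compile
# time, or if torch_tensorrt was not imported before torch.jit.load.
trt_mod = torch.jit.load("coex_trt.ts").cuda().eval()
```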

I hope this is helpful. Please let me know if you know of, or find, a solution to the problems I mentioned.

antabangun commented 1 year ago

I will be closing this issue now.