CoinCheung / BiSeNet

Add bisenetv2. My implementation of BiSeNet
MIT License
1.45k stars 309 forks

Failed to load onnx model into tensorRT #78

Closed: zldodo closed this issue 4 years ago

zldodo commented 4 years ago

Hi, thanks a lot for your nice work!

Currently, I am trying to train a BiSeNet model on my own data. The training code is taken directly from your repo. It works quite well during training and the evaluation results also look fine. After that, I converted the torch model to an ONNX model, which I then converted to TensorRT. However, I got some errors during model conversion.

  • torch to onnx

When it comes to torch.onnx.export, the conversion runs into errors if the default opset_version = 9 is used. But if I set opset_version to 11, no error shows up. Although I have obtained an ONNX model this way, I am wondering whether this also happened to you. If not, I worry about the side effects of this workaround; perhaps it is the reason for the error I get during the ONNX-to-TensorRT conversion (a minimal export sketch follows the environment list below).

  • onnx to tensorRT

When it comes to onnxToTRTModel, the following error comes up:

ERROR: onnx2trt_utils.hpp:277 In function convert_axis: [8] Assertion failed: axis >=0 && axis < nbDims

I have no idea how to fix it. Could you please describe the details of the model conversion as you do it? It would be great if you could share them online.

Environment in use:

  • pytorch 1.1 during training
  • pytorch 1.3 for ONNX model generation [because it failed with torch 1.1]
  • onnx IR version 0.0.4
  • tensorRT 5.1.5, CUDA 9.0
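For context, here is a minimal export sketch along the lines described above. It is an illustration, not the repo's exact demo: the `BiSeNet` import path, `n_classes` value, and checkpoint name are placeholders to adapt to your own setup, and the 640x1024 input matches the image resolution mentioned later in this thread.

```python
import torch

# Placeholder import: point this at the model class you actually trained.
from lib.models import BiSeNet  # assumption; adjust to your local package layout

net = BiSeNet(n_classes=19)                        # n_classes is a placeholder
state = torch.load('model_final.pth', map_location='cpu')
net.load_state_dict(state)
net.eval()

# Dummy NCHW input; 640x1024 matches the 1024*640 images discussed in this thread.
dummy = torch.randn(1, 3, 640, 1024)

# opset 9 (the default in older PyTorch) failed to export here; opset 11 worked.
torch.onnx.export(
    net, dummy, 'model.onnx',
    input_names=['input'],
    output_names=['output'],
    opset_version=11,
)
```

One caveat: the onnx-tensorrt parser bundled with TensorRT 5.x generally targets opset 9, so an opset-11 graph itself may be what trips the convert_axis assertion; trying a newer TensorRT release is worth a shot.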

midasklr commented 4 years ago

How is the performance of BiSeNet? What fps do you get on your dataset? I am looking for a lightweight segmentation net for my work...

zldodo commented 4 years ago

@midasklr, the inference time is around 12 ms per image on a Tesla V100. The image size is 1024*640.

midasklr commented 4 years ago

Nice~, I just converted a lightweight RefineNet to TensorRT FP16, but it still takes around 25 ms per 512*512 input...

jiaji-fang commented 4 years ago

Hi, I also trained the model on my own data with good results, but I cannot convert the model to ONNX. Would you tell me how to convert it to ONNX? Thanks.

CoinCheung commented 4 years ago

Hi,

I added a demo on how to export the model to ONNX and compile it with TensorRT. You can see if it helps you.
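For anyone who cannot run the demo, a rough sketch of the ONNX-to-TensorRT step via the Python API is below. This is not the repo's demo code: it uses the older implicit-batch builder API matching the TensorRT 5.x versions discussed in this thread, and `model.onnx` is a placeholder path.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path='model.onnx'):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network()            # implicit-batch network (pre-TRT7 style)
    parser = trt.OnnxParser(network, TRT_LOGGER)

    # Parse the exported ONNX graph and print any parser errors
    # instead of failing silently.
    with open(onnx_path, 'rb') as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 30          # 1 GB of build workspace
    # builder.fp16_mode = True                    # optional FP16, as midasklr used
    return builder.build_cuda_engine(network)

engine = build_engine()
```

Newer TensorRT releases drop build_cuda_engine in favour of an explicit-batch network plus a builder config (or the trtexec command-line tool), so adjust the calls to whichever version you have installed.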

I am closing this. You can open a new issue if you still have problems with this.