sergiomsilva / alpr-unconstrained

License Plate Detection and Recognition in Unconstrained Scenarios

TensorRT Implementation for Edge devices #111

Closed AsharFatmi closed 4 years ago

AsharFatmi commented 4 years ago

@sergiomsilva How can I convert my trained/fine-tuned license plate detector to a TensorRT engine to deploy it on an edge device like the Jetson Nano or Jetson TX2?

Currently, on the TX2, the runtime is not acceptable for an edge device.

A TensorRT implementation would help reduce the runtime.

Please let me know whether it is at all possible to implement TensorRT for this license plate detection, since the layers supported by TensorRT are limited.

Thanks in advance.

tudorikas commented 4 years ago

Hi, I think that for the YOLO part you can use this GitHub repo: https://github.com/jkjung-avt/tensorrt_demo. Good luck!

AsharFatmi commented 4 years ago

@tudorikas I am looking for the license plate detection part.

YOLO is being used for vehicle detection and OCR.

tudorikas commented 4 years ago

OK, for license plate detection you can change his code by training a YOLO network on this dataset: https://storage.googleapis.com/openimages/web/visualizer/index.html?set=train&type=detection&c=%2Fm%2F01jfm_ .

AsharFatmi commented 4 years ago

A YOLO network does not give a tight bounding box around just the license plate; it tends to include extra regions such as the bumper.

Also, YOLO outputs boxes in the format [left, top, width, height], whereas this Keras model is trained on all four corner coordinates of the LP region, so its output covers just the license plate.

Any thoughts?

shuangyichen commented 4 years ago

All the layers that WPOD-Net has are supported by TensorRT.

AsharFatmi commented 4 years ago

@shuangyichen

Do you have any idea how to convert the model to TensorRT?

I cannot convert it to a UFF model because merge, switch, etc. layers are not supported.

When converting to ONNX I have to provide a fixed input shape; do you know any workaround for this?

Can I edit the model JSON file to set the input shape without having to retrain the model?

shuangyichen commented 4 years ago

@shuangyichen Do you have any idea how to convert the model to TensorRT? I cannot convert it to a UFF model because merge, switch, etc. layers are not supported. When converting to ONNX I have to provide a fixed input shape; do you know any workaround for this? Can I edit the model JSON file to set the input shape without having to retrain the model?

Input shape: (1, 3, 480, 768). I have already finished the license-plate-detection training part in PyTorch, and I tried to convert the .pt model to ONNX and then to TRT.

AsharFatmi commented 4 years ago

Can I just edit the resulting model.json file for the input shape instead of retraining the model?

shuangyichen commented 4 years ago

@AsharFatmi You can try it. I tried to convert the Keras model to TensorRT before, but I failed. Most examples online go from torch to ONNX to TRT, and there are many tools for torch-to-TRT (though not necessarily easy to use).

AsharFatmi commented 4 years ago

@AsharFatmi You can try it. I tried to convert the Keras model to TensorRT before, but I failed. Most examples online go from torch to ONNX to TRT, and there are many tools for torch-to-TRT (though not necessarily easy to use).

Can you share some links regarding the torch to trt conversions?

shuangyichen commented 4 years ago

@AsharFatmi
1. https://github.com/NVIDIA-AI-IOT/torch2trt (I haven't tried this.)
2. A more common idea is to call torch.onnx.export first to generate an ONNX model, then either:
   2.1 use onnx-tensorrt to generate the TRT engine, or
   2.2 use the TensorRT Python API to convert the ONNX model to TRT (see the example in TensorRT/samples/python/yolov3_onnx).
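For reference, a minimal sketch of option 2 (a hedged example: wpod_net stands for a PyTorch re-implementation of WPOD-Net with weights already loaded, and the fixed (1, 3, 480, 768) NCHW shape is the one mentioned above):

import torch

wpod_net.eval()                              # placeholder PyTorch WPOD-Net model
dummy_input = torch.randn(1, 3, 480, 768)    # fixed NCHW input shape for export

torch.onnx.export(
    wpod_net,
    dummy_input,
    "wpod-net.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)

# The resulting wpod-net.onnx can then go through onnx-tensorrt (2.1) or be
# parsed with trt.OnnxParser as in TensorRT/samples/python/yolov3_onnx (2.2).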

AsharFatmi commented 4 years ago

@shuangyichen Thank you for all the help. I will try a couple of different ways.

AsharFatmi commented 4 years ago

I was able to convert the model to TensorRT: Keras to ONNX using the keras2onnx library, then ONNX to TensorRT using the onnx2trt tool.

But when I run inference on the .trt model, the output is different from the Keras model's output.

KERAS Model OUTPUT:-

[[[ 5.35682165e-16  1.00000000e+00  1.02072847e+00 ... -2.07958460e-01
    5.04563272e-01 -1.20855430e-02]
  [ 8.77869645e-17  1.00000000e+00  7.72716165e-01 ... -1.96673796e-01
    6.91386938e-01 -5.71288705e-01]
  [ 1.62569552e-29  1.00000000e+00  1.13321781e+00 ... -1.17423847e-01
    7.27890730e-01 -2.67761678e-01]
  ...
  [ 9.93481616e-11  1.00000000e+00  8.08987558e-01 ... -1.09358504e-01
    3.96312863e-01 -3.76817733e-02]
  [ 5.05836795e-08  1.00000000e+00  6.38040662e-01 ...  2.58072745e-03
    4.28376526e-01 -3.47221732e-01]
  [ 2.63872035e-09  1.00000000e+00  8.12149942e-01 ... -1.26684764e-02
    1.86289668e-01 -6.14305809e-02]]

 [[ 1.15013890e-12  1.00000000e+00  1.06367087e+00 ... -5.53396583e-01
    3.23190212e-01 -1.00925833e-01]
  [ 7.17620210e-18  1.00000000e+00  8.54213297e-01 ...  4.44860458e-01
   -6.65633440e-01 -1.41143531e-01]
  [ 4.54133458e-24  1.00000000e+00  1.19984531e+00 ...  1.20644540e-01
   -3.73397738e-01 -6.67934895e-01]
  ...
  [ 1.03462021e-08  1.00000000e+00  1.14307857e+00 ... -1.53487191e-01
    2.21320093e-01 -2.01362371e-01]
  [ 1.97145482e-08  1.00000000e+00  9.24914479e-01 ...  3.22847784e-01
   -6.38035834e-02 -2.14298382e-01]
  [ 1.70783192e-08  1.00000000e+00  9.65804636e-01 ...  4.09364104e-02
    2.28743017e-01 -3.36741388e-01]]

 [[ 8.94335161e-11  1.00000000e+00  7.75921345e-01 ...  1.04476353e-02
    1.51742458e-01  5.28428927e-02]
  [ 9.04731773e-16  1.00000000e+00  4.41512674e-01 ...  1.06468916e-01
   -8.99115801e-02  2.96098143e-01]
  [ 2.10258343e-17  1.00000000e+00  2.47518599e-01 ...  7.37828463e-02
   -1.92303687e-01  4.42358196e-01]
  ...
  [ 2.41078150e-16  1.00000000e+00  8.20632398e-01 ... -1.57306686e-01
    1.35591239e-01 -4.33367584e-03]
  [ 1.30340170e-13  1.00000000e+00  6.97541475e-01 ... -1.88214704e-01
    8.80702883e-02 -1.52649814e-02]
  [ 5.57648372e-09  1.00000000e+00  9.97833073e-01 ... -1.55004978e-01
    2.76494384e-01 -6.27942905e-02]]

 ...

 [[ 0.00000000e+00  1.00000000e+00  8.85395885e-01 ...  3.80443633e-02
    4.81112301e-01 -1.28768587e+00]
  [ 0.00000000e+00  1.00000000e+00  1.00841582e+00 ... -2.23223472e+00
   -3.39565903e-01 -5.87081552e-01]
  [ 0.00000000e+00  1.00000000e+00  1.37934232e+00 ... -1.70936894e+00
    2.07115859e-01 -2.22187567e+00]
  ...
  [ 2.83864250e-25  1.00000000e+00  1.45969176e+00 ... -7.09485292e-01
    2.28834689e-01 -3.29626322e-01]
  [ 1.24496172e-15  1.00000000e+00  1.31729507e+00 ... -6.66390240e-01
    1.25542924e-01 -1.47483870e-01]
  [ 2.48216048e-09  1.00000000e+00  1.15188110e+00 ... -1.47737026e-01
    3.60481203e-01 -2.20017970e-01]]

 [[ 0.00000000e+00  1.00000000e+00  1.48687065e-01 ... -8.53215158e-01
   -1.15160823e+00  8.59936476e-01]
  [ 0.00000000e+00  1.00000000e+00  6.86580241e-01 ...  2.05904245e-02
    3.43396455e-01 -6.00309968e-01]
  [ 0.00000000e+00  1.00000000e+00 -1.29906058e-01 ...  8.33660722e-01
    6.93807364e-01  5.96281648e-01]
  ...
  [ 2.00734995e-30  1.00000000e+00  8.45168352e-01 ...  4.50236648e-02
    3.50932181e-01 -7.83286393e-02]
  [ 2.55876199e-19  1.00000000e+00  9.60333586e-01 ...  2.57166326e-02
    5.82121909e-02 -9.72378105e-02]
  [ 4.64498925e-12  1.00000000e+00  9.78434443e-01 ...  6.36013597e-02
    3.79364163e-01 -3.65832038e-02]]

 [[ 0.00000000e+00  1.00000000e+00 -1.19233370e-01 ... -3.39587510e-01
    8.02178085e-02  1.38728428e+00]
  [ 0.00000000e+00  1.00000000e+00 -2.54251719e-01 ...  9.89254951e-01
   -1.02365923e+00  1.74043214e+00]
  [ 0.00000000e+00  1.00000000e+00  6.75196886e-01 ...  1.14767244e-02
    8.67833734e-01  5.49621701e-01]
  ...
  [ 1.01715141e-30  1.00000000e+00  1.23101699e+00 ... -3.87017936e-01
    1.34929419e-01  1.46293178e-01]
  [ 5.34217413e-22  1.00000000e+00  1.08855963e+00 ... -6.93247654e-04
    1.25452936e-01  7.26785064e-02]
  [ 2.12459472e-13  1.00000000e+00  9.25891340e-01 ... -5.17582297e-02
    1.55512884e-01 -1.80828243e-01]]]

TRT model OUTPUT:

[ 0. 1. 148.38953 ... -299.67487 -987.18317 254.93916]

Any idea why this is??

shuangyichen commented 4 years ago

@AsharFatmi I have the same problem. I found that the ONNX model parsing was not successful; in my case, my TRT engine only has the first conv layer. You can check your TRT engine. If you solve the problem, please tell me.

AsharFatmi commented 4 years ago

@AsharFatmi I have the same problem. I found that the ONNX model parsing was not successful; in my case, my TRT engine only has the first conv layer. You can check your TRT engine. If you solve the problem, please tell me.

I cannot open the TRT engine in Netron to check. How did you check your engine?

shuangyichen commented 4 years ago

https://github.com/NVIDIA/TensorRT/issues/375 Please try the way he mentioned.
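Roughly, the approach from that issue is to parse the ONNX file and walk the resulting network layer by layer. A minimal sketch along those lines (TensorRT 6+ Python API with explicit batch; the ONNX path is a placeholder):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network(flags) as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("keras_input_model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            # print the parser errors instead of failing silently
            for i in range(parser.num_errors):
                print(parser.get_error(i))
    # dump every layer the parser actually managed to build
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        print(i, layer.name, layer.type)

If only a single Shuffle layer shows up (as in the dump below), the parser gave up right after the input.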

AsharFatmi commented 4 years ago

I did what he said, and I am getting the same thing as you: only one layer in the resulting TRT file.

{
    "0": {
        "inputs": {
            "0": {
                "dtype": "DataType.FLOAT",
                "name": "'input'",
                "shape": "(600, 845, 3)"
            }
        },
        "name": "'(Unnamed Layer* 0) [Shuffle]'",
        "num_inputs": "1",
        "num_outputs": "1",
        "outputs": {
            "0": {
                "dtype": "DataType.FLOAT",
                "name": "'adjusted_input25'",
                "shape": "(3, 600, 845)"
            }
        },
        "precision": "DataType.FLOAT",
        "precision_is_set": "False",
        "type": "LayerType.SHUFFLE"
    }
}

The error that I am getting is a bit different than yours.

kingashar@kingashar:~/tensorRT$ python3 network_check.py 
Loading ONNX file from path ~/YOLOV3-LP/keras_input_model.onnx...
Beginning ONNX file parsing
In node 1 (importModel): INVALID_GRAPH: Assertion failed: tensors.count(input_name)
Writing /home/kingashar/network.json
Completed parsing of ONNX file
Building an engine from file ~/YOLOV3-LP/keras_input_model.onnx; this may take a while...
Completed creating Engine

shuangyichen commented 4 years ago

Are you using TensorRT 5? I got the same problem when I used TensorRT 5. You can try TensorRT 7.

AsharFatmi commented 4 years ago

I am using TensorRT 6.0.1.5; I will try TensorRT 7.

Did it work for you?

shuangyichen commented 4 years ago

No... still working on it. I tried the ONNX model that keras2onnx generated, and it shows: In node -1 (importInput): UNSUPPORTED_GRAPH: Assertion failed: convertOnnxDims(onnxDtype.shape().dim(), trt_dims)

AsharFatmi commented 4 years ago

Let me know if you are able to crack it. I will do the same.

I have a similar error:

In node 1 (importModel): INVALID_GRAPH: Assertion failed: tensors.count(input_name)

AsharFatmi commented 4 years ago

No... still working on it. I tried the ONNX model that keras2onnx generated, and it shows: In node -1 (importInput): UNSUPPORTED_GRAPH: Assertion failed: convertOnnxDims(onnxDtype.shape().dim(), trt_dims)

Maybe the input shape is the issue for you, as you mentioned earlier:

@shuangyichen Do you have any idea how to convert the model to TensorRT? I cannot convert it to a UFF model because merge, switch, etc. layers are not supported. When converting to ONNX I have to provide a fixed input shape; do you know any workaround for this? Can I edit the model JSON file to set the input shape without having to retrain the model?

Input shape: (1, 3, 480, 768). I have already finished the license-plate-detection training part in PyTorch, and I tried to convert the .pt model to ONNX and then to TRT.

For Keras the format is (batch, height, width, 3).

Maybe try the process with (1, 480, 768, 3). Good luck!

shuangyichen commented 4 years ago

@AsharFatmi Sorry, of course the input shape for the Keras model is (1, 480, 768, 3); what I mentioned before is the input shape for the torch model. I remember that when converting torch to ONNX an input shape is needed, but when converting the Keras model to ONNX I just used keras2onnx. Could you please share your keras2onnx code and your ONNX model in Netron? Thank you!

AsharFatmi commented 4 years ago

Yeah, I meant the input of the Keras model initially, i.e. the input layer of the Keras model.

For example, in the file attached I have specified the input layer shape.

model_config.txt
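For reference, a minimal sketch of fixing the input shape in the saved architecture JSON without retraining (the file names and the (1, 600, 845, 3) shape are just examples, and the exact nesting of the config can differ between Keras versions):

import json
from keras.models import model_from_json

with open("wpod-net_update1.json") as f:
    cfg = json.load(f)

# the first entry in the layer list is the InputLayer; replace its dynamic
# batch_input_shape [null, null, null, 3] with a fixed NHWC shape
cfg["config"]["layers"][0]["config"]["batch_input_shape"] = [1, 600, 845, 3]

model = model_from_json(json.dumps(cfg))
model.load_weights("wpod-net_update1.h5")    # the weights are unchanged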

Keras to ONNX code:

import keras2onnx
import onnxruntime
from src.keras_utils import load_model

# load the trained detector (architecture + weights)
model = load_model('/home/kingashar/YOLOV3-LP/lp_detector/uae-1392_final.h5')

# convert the Keras model to ONNX
onnx_model = keras2onnx.convert_keras(model, name=None, doc_string='', target_opset=None, channel_first_inputs=None)

# save the ONNX model and sanity-check that onnxruntime can load it
temp_model_file = 'keras_input_model_JK.onnx'
keras2onnx.save_model(onnx_model, temp_model_file)
sess = onnxruntime.InferenceSession(temp_model_file)
print('DONE')

I am not able to export the PNG file from Netron.

shuangyichen commented 4 years ago

Sorry, I still don't get it. Where should I specify the input shape? In the ONNX-to-TRT step?

shuangyichen commented 4 years ago

My keras2onnx model in Netron shows that the input shape is (N_0_0*3); maybe that's the problem. But I still don't know where I should specify the input shape.

AsharFatmi commented 4 years ago

How have you saved your Keras model? I have saved the weights and the config separately, so I specified the input shape in the config file.

My keras2onnx model in Netron shows that the input shape is (N_0_0*3); maybe that's the problem. But I still don't know where I should specify the input shape.

I believe that is the issue.

I have attached screenshots of my Netron graph and JSON config file below for reference.

[Screenshot from 2020-02-13 14-15-42] [Screenshot from 2020-02-13 14-25-18]

Keras model: [Screenshot from 2020-02-13 14-33-38]

AsharFatmi commented 4 years ago

@shuangyichen Did it work ??

AsharFatmi commented 4 years ago

@shuangyichen

Now my TRT model is working. Thank you for your help; I solved the issue.

PiyalGeorge commented 4 years ago

Hi, I'm trying to convert the 'license-plate-detection' code to C++. Converting the whole Keras code to C++ seems like huge work, so instead I'm trying to convert the Keras model (JSON and h5 files) to a TensorFlow model (my plan is to then use that TensorFlow model in TensorFlow-based C++ code). In the code below I have converted the Keras model to a TensorFlow model, and then I try to run detection using the .pb file. The output of the prediction is a matrix of shape (1, 42, 64, 8). Is there a way I can get detection box coordinates from it? (A decoding sketch follows the output below.) Please help me.

from keras import backend as K
K.set_learning_phase(0)
import tensorflow as tf
from keras.models import model_from_json

jsonfile = open('wpod-net_update1.json', 'r')
model_json = jsonfile.read()
jsonfile.close()

model = model_from_json(model_json)
model.load_weights('wpod-net_update1.h5')

print(model.outputs)
#Our Output tensor : [<tf.Tensor 'concatenate_1/concat:0' shape=(?, ?, ?, 8) dtype=float32>]
print(model.inputs)
#Our Input tensor : [<tf.Tensor 'input:0' shape=(?, ?, ?, 3) dtype=float32>]

def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        # Graph -> GraphDef ProtoBuf
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        #frozen_graph = convert_variables_to_constants(session, input_graph_def, output_names, freeze_var_names)
        frozen_graph = tf.compat.v1.graph_util.convert_variables_to_constants(session, input_graph_def, output_names, freeze_var_names)
        return frozen_graph

frozen_graph = freeze_session(K.get_session(), output_names=[out.op.name for out in model.outputs])
tf.train.write_graph(frozen_graph, "latest-pbfile", "wpod_tf_model.pb", as_text=False)

# The code above converts the Keras model to a TensorFlow .pb file.
# The code below is what I'm trying to use on an image to get detection coordinates.

import os
import cv2 
import numpy as np 
import sys
from tensorflow.python.platform import gfile

sess=tf.InteractiveSession()
f = gfile.FastGFile("wpod_tf_model.pb", 'rb')
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
f.close()

sess.graph.as_default()
tf.import_graph_def(graph_def)

output_tensor = sess.graph.get_tensor_by_name('import/concatenate_1/concat:0')
#Our Output tensor : [<tf.Tensor 'concatenate_1/concat:0' shape=(?, ?, ?, 8) dtype=float32>]

image = cv2.imread("car.png")

# note: the repo normalizes images to [0, 1] (see im2single in src/utils.py) and resizes
# them before inference; feeding raw 0-255 pixels gives very different output values
image_expanded = np.expand_dims(image.astype(np.float32) / 255.0, axis=0)

predictions = sess.run(output_tensor, {'import/input:0': image_expanded})

print(predictions)
print(predictions.shape)

Output from above code:

[[[[ 0.00000000e+00  1.00000000e+00 -3.34710617e+01 ...  1.94643288e+01  3.68301353e+01  9.24516773e+00]
   [ 0.00000000e+00  1.00000000e+00  2.37034202e+00 ... -2.75982304e+01  3.61542473e+01  5.73606300e+00]
   ...
   [ 0.00000000e+00  1.00000000e+00  1.56508980e+01 ... -5.42593575e+00  1.62256050e+01 -1.86855240e+01]]

  [[ 0.00000000e+00  1.00000000e+00 -7.79394388e+00 ...  3.09290199e+01  8.63655853e+01  1.36856709e+01]
   [ 0.00000000e+00  1.00000000e+00  2.93666458e+01 ... -3.08292675e+01  1.09749474e+02  2.12900391e+01]
   [ 0.00000000e+00  1.00000000e+00  4.60487442e+01 ... -4.87868652e+01  1.08859711e+02  5.81501389e+00]
   ...
   [ 0.00000000e+00  1.00000000e+00  4.77344284e+01 ... -6.47512360e+01  1.09733971e+02 -2.87336864e+01]
   [ 0.00000000e+00  1.00000000e+00  4.25194588e+01 ... -3.57531586e+01  4.50154076e+01 -3.05396996e+01]]

  ...

  [[ 5.99232838e-18  1.00000000e+00  1.49527061e+00 ...  5.97574651e-01 -2.52342284e-01 -1.89448029e-01]
   ...
   [ 1.31391238e-17  1.00000000e+00  1.93043900e+00 ...  7.44476855e-01 -7.56041348e-01  8.58260393e-02]]]]

(1, 42, 64, 8)
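For getting plate corners out of that (1, 42, 64, 8) map: channel 0 is the object probability and channels 2-7 are affine parameters, and they have to be decoded the way reconstruct() in this repo's src/label.py does it. A rough numpy sketch of that idea (the threshold and the cell side value are assumptions here, not the repo's exact defaults):

import numpy as np

def decode_wpod(Y, threshold=0.5, side=7.75):
    # Y: WPOD-Net output of shape (1, rows, cols, 8); returns a list of
    # (probability, 2x4 corner points normalised to [0, 1] of the network input)
    Y = np.squeeze(Y)                        # -> (rows, cols, 8)
    probs = Y[..., 0]                        # channel 0: object probability
    affines = Y[..., 2:]                     # channels 2..7: affine parameters
    rows, cols = probs.shape

    # canonical unit square corners in homogeneous coordinates, shape (3, 4)
    base = np.array([[-0.5,  0.5, 0.5, -0.5],
                     [-0.5, -0.5, 0.5,  0.5],
                     [ 1.0,  1.0, 1.0,  1.0]])

    detections = []
    for y, x in zip(*np.where(probs > threshold)):
        A = affines[y, x].reshape(2, 3).copy()
        A[0, 0] = max(A[0, 0], 0.0)          # forbid mirrored transforms
        A[1, 1] = max(A[1, 1], 0.0)
        pts = A @ base                       # (2, 4) corner offsets in cell units
        pts = pts * side + np.array([[x + 0.5], [y + 0.5]])   # feature-map coords
        pts /= np.array([[cols], [rows]])    # normalise to [0, 1]
        detections.append((float(probs[y, x]), pts))
    return detections                        # non-max suppression still needed

Note that the network also expects its input resized and normalised to [0, 1], as in the repo's detection pipeline (src/keras_utils.py).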

arjunbinu commented 4 years ago

Are you using TensorRT 5? I got the same problem when I used TensorRT 5. You can try TensorRT 7.

Hi, I'm using TensorRT 5.1.6 on my Jetson TX2 (JetPack 4.2.2) to convert the Keras model to ONNX and then create a TensorRT engine. I managed to convert to ONNX using keras2onnx as mentioned by @AsharFatmi, although with some warnings. Then I decided to give it a try and ran the onnx_to_tensorrt.py mentioned above by @AsharFatmi, but it resulted in the following errors:

2020-04-07 14:22:32 - main - INFO - TRT_LOGGER Verbosity: Severity.ERROR
Traceback (most recent call last):
  File "onnx_to_tensorrt.py", line 141, in <module>
    main()
  File "onnx_to_tensorrt.py", line 79, in main
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network(network_flags) as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
TypeError: create_network(): incompatible function arguments. The following argument types are supported:
    1. (self: tensorrt.tensorrt.Builder) -> tensorrt.tensorrt.INetworkDefinition

Invoked with: <tensorrt.tensorrt.Builder object at 0x7f5ee71228>, 0

I would really appreciate a solution for converting it to TRT using the existing TensorRT 5.
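The create_network() error is an API difference rather than a model problem: in TensorRT 5, create_network() takes no arguments, while the explicit-batch flag that script passes only exists from TensorRT 6 onward. A small sketch of handling both (just the network-creation part; whether the ONNX parse then succeeds on TensorRT 5 is another matter, see the earlier INVALID_GRAPH errors in this thread):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.ERROR)
builder = trt.Builder(TRT_LOGGER)

if hasattr(trt, "NetworkDefinitionCreationFlag"):
    # TensorRT 6/7: the ONNX parser wants an explicit-batch network
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
else:
    # TensorRT 5: create_network() accepts no flags (implicit batch only)
    network = builder.create_network()

parser = trt.OnnxParser(network, TRT_LOGGER)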

makaveli10 commented 4 years ago

@shuangyichen Thanks for sharing your work! I am using your repo to convert the model to TRT, but the results differ between Keras and TRT. Have you been able to successfully convert the Keras model to TRT? Please share some insights on how to get correct results from the TRT engine.

shuangyichen commented 4 years ago

@shuangyichen Thanks for sharing your work! I am using your repo to convert the model to TRT, but the results differ between Keras and TRT. Have you been able to successfully convert the Keras model to TRT? Please share some insights on how to get correct results from the TRT engine.

You mean output size or ..?

makaveli10 commented 4 years ago

@shuangyichen I mean the outputs (values) of Keras and TRT do not match! Does the code in the TensorRT 7 branch work for you? First of all, I had to change the input shape in model.json from [1, null, null, 3] to [1, 600, 845, 3] for the ONNX-to-TRT conversion to work. But the outputs for Keras and TRT are different. Thanks!

makaveli10 commented 4 years ago

@shuangyichen If the repo is not tested, please let me know! Thanks. I haven't been able to make much progress in getting correct results from TensorRT using your code, so yeah, I could use your help.

shuangyichen commented 4 years ago

@shuangyichen If the repo is not tested, please let me know! Thanks. I haven't been able to make much progress in getting correct results from TensorRT using your code, so yeah, I could use your help.

Sure, it works. My Keras output and TRT output are almost the same; they differ only in the last few decimal places.

makaveli10 commented 4 years ago

@shuangyichen Okay! Sure, I must have been doing something wrong; I'll try again. Can you please share the script you used to verify the numerical results?

shuangyichen commented 4 years ago

@makaveli10 I just subtract the two outputs, and the result was something like x.xxe-14, so I consider the two outputs equal.
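A quick check along those lines (keras_output and trt_output are hypothetical arrays holding the two results; the TRT output usually comes back flat, hence the reshape):

import numpy as np

diff = np.abs(keras_output - trt_output.reshape(keras_output.shape))
print("max abs difference:", diff.max())     # something like 1e-14 means they match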

makaveli10 commented 4 years ago

@shuangyichen and you used the same pretrained models as in this repo?

shuangyichen commented 4 years ago

@makaveli10 Sure

bhargavravat commented 3 years ago

@shuangyichen @AsharFatmi,

Hi!

Have you guys succeeded in converting the OCR-NET model to a TensorRT model (ONNX / TRT)?

AsharFatmi commented 3 years ago

@bhargavravat Hi,

Yes, I was able to convert it. You need darknet2onnx / pytorch2onnx to do this task; you can refer to https://github.com/Tianxiaomo/pytorch-YOLOv4

bhargavravat commented 3 years ago

Hey @AsharFatmi :

I tried running the code, but it is giving me an error.

RuntimeError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors. Available functions are [CPU, CUDA, QuantizedCPU, Autograd, Profiler, Tracer, Autocast]
