NVIDIA-AI-IOT / tf_to_trt_image_classification

Image classification with NVIDIA TensorRT from TensorFlow models.
BSD 3-Clause "New" or "Revised" License

how to convert my own slim model #14

jerryhouuu opened this issue 6 years ago

jerryhouuu commented 6 years ago

Hello, I want to use TensorRT to speed up the facenet project: davidsandberg/facenet

I froze the model into a .pb TensorFlow model, but I am confused about how to convert it into a .plan. I also tried to modify the code to be similar to the model zoo's inception-resnet-v2 and re-train, but that could not be converted into a .uff model either.

ghost commented 6 years ago

Hi jerryhouuu,

The scripts in this repository are only tested against the models listed.

It may help to watch the TensorFlow to TensorRT on Jetson Webinar for more information on how to optimize TensorFlow models with TensorRT, and better understand the known limitations.

That said, if you are able to provide the frozen graph of the model you are attempting to convert, I may be able to provide more detailed insight.

jerryhouuu commented 6 years ago

@jaybdub-nv Thank you for your reply. I got the error below while converting the frozen graph into a UFF model; could you give me some advice?

Converting to UFF graph
Warning: keep_dims is not supported, ignoring...
DEBUG: convert reshape to flatten node
Warning: No conversion function registered for layer: QueueDequeueUpToV2 yet.
Converting as custom op QueueDequeueUpToV2 batch_join
name: "batch_join"
op: "QueueDequeueUpToV2"
input: "batch_join/fifo_queue"
input: "batch_size"
attr {
  key: "component_types"
  value {
    list {
      type: DT_FLOAT
      type: DT_INT64
    }
  }
}
attr {
  key: "timeout_ms"
  value {
    i: -1
  }
}

My batch training code is shown below:

        image_paths_placeholder = tf.placeholder(tf.string, shape=(None,1), name='image_paths')
        labels_placeholder = tf.placeholder(tf.int32, shape=(None,1), name='labels')
        input_queue = data_flow_ops.FIFOQueue(capacity=100000,
                                    dtypes=[tf.string, tf.int32],
                                    shapes=[(1,), (1,)],
                                    shared_name=None, name=None)
        enqueue_op = input_queue.enqueue_many([image_paths_placeholder, labels_placeholder], name='enqueue_op')

        nrof_preprocess_threads = 4
        images_and_labels = []
        for _ in range(nrof_preprocess_threads):
            filenames, label = input_queue.dequeue()
            images = []
            for filename in tf.unstack(filenames):
                file_contents = tf.read_file(filename)
                image = tf.image.decode_image(file_contents, channels=3)
                if args.random_rotate:
                    image = tf.py_func(facenet.random_rotate_image, [image], tf.uint8)
                if args.random_crop:
                    image = tf.random_crop(image, [args.image_size, args.image_size, 3])          
                else:
                    image = tf.image.resize_image_with_crop_or_pad(image, args.image_size, args.image_size)
                if args.random_flip:
                    image = tf.image.random_flip_left_right(image)

                #pylint: disable=no-member
                image.set_shape((args.image_size, args.image_size, 3))
                images.append(tf.image.per_image_standardization(image))
            images_and_labels.append([images, label])

        image_batch, label_batch = tf.train.batch_join(
            images_and_labels, batch_size=batch_size_placeholder, 
            shapes=[(args.image_size, args.image_size, 3), ()], enqueue_many=True,
            capacity=4 * nrof_preprocess_threads * args.batch_size,
            allow_smaller_final_batch=True)
ghost commented 6 years ago

@jerryhouuu thanks for sharing.

It looks like you're including several operations in your training pipeline that are not supported by TensorRT. The list of supported operations is documented in the TensorRT 3 Developer Guide.
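
For reference, one quick way to check this is to dump the distinct op types used in the frozen graph and compare them against that list. A small sketch (the .pb path is a placeholder):

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_graph.pb', 'rb') as f:  # placeholder path
    graph_def.ParseFromString(f.read())

# Print every distinct op type that appears in the graph.
print(sorted({node.op for node in graph_def.node}))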

If you are okay with using the TensorFlow runtime, you should be able to optimize your model using TensorFlow 1.7 with TensorRT integration. The tensorflow.contrib.tensorrt package includes the create_inference_graph method, which optimizes your frozen inference graph by replacing TensorRT-supported sub-graphs with optimized TensorRT engines; the unsupported operations will still execute in plain TensorFlow.
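
A minimal sketch of that call, assuming TensorFlow 1.7 with the contrib.tensorrt package available and 'embeddings' as the output node (adjust names and paths to your model):

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Load the frozen graph (placeholder path).
frozen_graph = tf.GraphDef()
with tf.gfile.GFile('frozen_graph.pb', 'rb') as f:
    frozen_graph.ParseFromString(f.read())

# Replace TensorRT-compatible sub-graphs with TensorRT engines; any
# unsupported operations keep running in TensorFlow.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=['embeddings'],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP32')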

If you want to execute your model using only TensorRT (without the TensorFlow runtime), you will need to remove (or replace) any unsupported operations in your model before exporting to UFF. This includes removing the pre-processing steps in your training pipeline, such as loading, cropping, and randomly flipping the image.
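
One way to do this (a rough sketch, not tested against your checkpoint) is to re-import the training meta graph with the queue output mapped to a plain placeholder, then freeze only the nodes reachable from the output. The tensor names ('batch_join:0', 'phase_train:0', 'embeddings'), the input size, and the checkpoint paths below are assumptions; check them against your graph:

import tensorflow as tf

# Inference-friendly inputs that replace the training input pipeline.
images = tf.placeholder(tf.float32, [None, 160, 160, 3], name='input')
phase_train = tf.constant(False)  # run batch norm in inference mode

# Re-import the training graph, rerouting the queue output and the
# training flag to the tensors above (node names are assumptions).
saver = tf.train.import_meta_graph(
    'model.ckpt.meta',
    input_map={'batch_join:0': images, 'phase_train:0': phase_train})

with tf.Session() as sess:
    saver.restore(sess, 'model.ckpt')
    # Only nodes reachable from 'embeddings' are kept, so the queue and
    # augmentation ops are dropped from the frozen graph.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ['embeddings'])

with tf.gfile.GFile('facenet_inference.pb', 'wb') as f:
    f.write(frozen.SerializeToString())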

Also, please note that TensorRT is for inference only, so once you have created an optimized model using either of the above two methods, its parameters can no longer be updated the way they are during training.

Let me know if you have any more questions.

jerryhouuu commented 6 years ago

Hi @jaybdub-nv, I tried to convert my model to UFF and then run inference in TensorRT.

pb model: link
uff model: link

I'm sure this .pb model works normally in TensorFlow.

# Legacy TensorRT 3.x Python API imports.
import tensorrt as trt
import uff
from tensorrt.parsers import uffparser

pb_model = '20180518-115854.pb'
G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)
uff_model = uff.from_tensorflow_frozen_model(pb_model, ["embeddings"])

parser = uffparser.create_uff_parser()
parser.register_input("input", (160, 160, 3), 0)
parser.register_output("embeddings")

engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20, trt.infer.DataType.FLOAT)

The program produces the following error:

Using output node embeddings
Converting to UFF graph
Warning: keep_dims is not supported, ignoring...
DEBUG: convert reshape to flatten node
No. nodes: 1095
[TensorRT] ERROR: Parameter check failed at: Utils.cpp::reshapeWeights::71, condition: input.values != nullptr
[TensorRT] ERROR: UFFParser: Parser error: InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/FusedBatchNorm: reshape weights failed!
[TensorRT] ERROR: Failed to parse UFF model stream
  File "/usr/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 191, in uff_to_trt_engine
    assert(parser.parse(stream, network, model_datatype))
Traceback (most recent call last):
  File "inference.py", line 40, in <module>
    engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20, trt.infer.DataType.FLOAT)
  File "/usr/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 199, in uff_to_trt_engine
    raise AssertionError('UFF parsing failed on line {} in statement {}'.format(line, text))
AssertionError: UFF parsing failed on line 191 in statement assert(parser.parse(stream, network, model_datatype))

Could anyone provide me with additional information to solve this? Thank you.

jerryhouuu commented 6 years ago

@jaybdub-nv Hello, could you give me some advice about the above error? Thank you.

xhzzhang commented 6 years ago

@jerryhouuu Hello, I got the same issue, shown below:

ERROR: Parameter check failed at: Utils.cpp::reshapeWeights::71, condition: input.values != nullptr
ERROR: UFFParser: Parser error: res_aspp_g/decoder/resnet/bn_conv1/batch_normalization/FusedBatchNorm: reshape weights failed!
ERROR: sample_uff_ssd: Fail to parse

Do you have any idea about this? Thanks.

xhzzhang commented 6 years ago

@jaybdub-nv

When I convert the .pb to .uff, I get the issue below:

xhz@xhz-omen:tf_to_trt_image_classification$ python scripts/convert_plan.py /home/xhz/Projects/models/xw_model/new/LsGan_model_test.pb /home/xhz/Projects/models/xw_model/LsGan_model.pb.plan input_img 360 640 sigmoid_logits 1 0 float
Using output node sigmoid_logits
Converting to UFF graph
WARNING: The UFF converter currently only supports 2D dilated convolutions
WARNING: The UFF converter currently only supports 2D dilated convolutions
WARNING: The UFF converter currently only supports 2D dilated convolutions
WARNING: The UFF converter currently only supports 2D dilated convolutions
WARNING: The UFF converter currently only supports 2D dilated convolutions
WARNING: The UFF converter currently only supports 2D dilated convolutions
WARNING: The UFF converter currently only supports 2D dilated convolutions
WARNING: The UFF converter currently only supports 2D dilated convolutions
WARNING: The UFF converter currently only supports 2D dilated convolutions
No. nodes: 570
UFF Output written to data/tmp.uff
UFFParser: parsing input_img
UFFParser: parsing res_aspp_g/decoder/resnet/conv1/weights
UFFParser: parsing res_aspp_g/decoder/resnet/conv1/conv1
UFFParser: Convolution: add Padding Layer to support asymmetric padding
UFFParser: Convolution: Left: 2
UFFParser: Convolution: Right: 3
UFFParser: Convolution: Top: 2
UFFParser: Convolution: Bottom: 3
UFFParser: parsing res_aspp_g/decoder/resnet/bn_conv1/BatchNorm/Const
UFFParser: parsing res_aspp_g/decoder/resnet/bn_conv1/BatchNorm/beta
UFFParser: parsing res_aspp_g/decoder/resnet/bn_conv1/BatchNorm/Const_1
UFFParser: parsing res_aspp_g/decoder/resnet/bn_conv1/BatchNorm/Const_2
UFFParser: parsing res_aspp_g/decoder/resnet/bn_conv1/BatchNorm/FusedBatchNorm
Parameter check failed at: Utils.cpp::reshapeWeights::71, condition: input.values != nullptr
UFFParser: Parser error: res_aspp_g/decoder/resnet/bn_conv1/BatchNorm/FusedBatchNorm: reshape weights failed!
Failed to parse UFF

Karthik777 commented 6 years ago

same issue here.

[TensorRT] INFO: UFFParser: parsing Placeholder
[TensorRT] INFO: UFFParser: parsing conv0/weights
[TensorRT] INFO: UFFParser: parsing conv0/conv0/Conv2D
[TensorRT] INFO: UFFParser: parsing conv0/biases
[TensorRT] INFO: UFFParser: parsing conv0/conv0/BiasAdd
[TensorRT] INFO: UFFParser: parsing conv0/conv0/Relu
[TensorRT] INFO: UFFParser: parsing MaxPool2D/MaxPool
[TensorRT] INFO: UFFParser: Pooling: add Padding Layer to support asymmetric padding
[TensorRT] INFO: UFFParser: Pooling: Left: 0
[TensorRT] INFO: UFFParser: Pooling: Right: 1
[TensorRT] INFO: UFFParser: Pooling: Top: 0
[TensorRT] INFO: UFFParser: Pooling: Bottom: 1
[TensorRT] INFO: UFFParser: parsing dense_0/dense_0_bottleN_0/BatchNorm/Const
[TensorRT] INFO: UFFParser: parsing BatchNorm/beta
[TensorRT] INFO: UFFParser: parsing dense_0/dense_0_bottleN_0/BatchNorm/Const_1
[TensorRT] INFO: UFFParser: parsing dense_0/dense_0_bottleN_0/BatchNorm/Const_2
[TensorRT] INFO: UFFParser: parsing dense_0/dense_0_bottleN_0/BatchNorm/FusedBatchNorm
[TensorRT] ERROR: Parameter check failed at: Utils.cpp::reshapeWeights::71, condition: input.values != nullptr
[TensorRT] ERROR: UFFParser: Parser error: dense_0/dense_0_bottleN_0/BatchNorm/FusedBatchNorm: reshape weights failed!
[TensorRT] ERROR: Failed to parse UFF model stream

PapaMadeleine2022 commented 5 years ago

Same error with FusedBatchNorm. How can it be fixed?

PapaMadeleine2022 commented 5 years ago

@jaybdub-nv Can you give us some advice?

janchk commented 5 years ago

Same error.

deaffella commented 4 years ago

Hello, I want to use TensorRT to speed up the facenet project: davidsandberg/facenet

I froze the model into a .pb TensorFlow model, but I am confused about how to convert it into a .plan. I also tried to modify the code to be similar to the model zoo's inception-resnet-v2 and re-train, but that could not be converted into a .uff model either.

Hi! I want to convert the facenet model to TRT, but it seems unachievable. Have you resolved this problem?

250zhanghu commented 3 years ago

@jerryhouuu thanks for sharing.

Thanks for your advice; I will try it.