kwotsin / TensorFlow-ENet

TensorFlow implementation of ENet
MIT License
257 stars 123 forks

Adding input/output nodes to freeze the graph for inference only #8

Closed chandrakantkhandelwal closed 7 years ago

chandrakantkhandelwal commented 7 years ago

I was trying to freeze the graph; however, you are using a TensorFlow input pipeline instead of a placeholder. Could you please explain how to remove the input pipeline and add a node for reading an input image?

Thanks!

chandrakantkhandelwal commented 7 years ago

Got the answer in the closed issue #5
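
For reference, the gist from #5 is to rebuild the network on a placeholder instead of the input pipeline before freezing. A minimal sketch, assuming the ENet and ENet_arg_scope definitions from this repo's enet.py (the input shape and hyperparameters below are placeholders; match them to your own training run):

```python
import tensorflow as tf
from enet import ENet, ENet_arg_scope

slim = tf.contrib.slim

# A named placeholder replaces the training input pipeline for inference.
input_image = tf.placeholder(tf.float32, shape=[1, 360, 480, 3], name='input_image')

with slim.arg_scope(ENet_arg_scope()):
    logits, probabilities = ENet(input_image,
                                 num_classes=12,
                                 batch_size=1,
                                 is_training=False,
                                 num_initial_blocks=1,
                                 stage_two_repeat=2,
                                 skip_connections=True)
# 'probabilities' is produced under the 'ENet/logits_to_softmax' name,
# which is the node to pass as the output when freezing.
```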

harsh-agar commented 6 years ago

Hi @chandrakantkhandelwal, can you help me figure out what 'output_node_names' should be here? I think it is ['ENet/logits_to_softmax'], but I'm not getting the correct answer.

Thanks

chandrakantkhandelwal commented 6 years ago

@harsh-agar I have used the same name for the output node and it's working. However, the best way to verify is to print all the nodes of the graph and confirm it there.
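
For example, a quick way to list every node name (a sketch; the checkpoint path is a placeholder for your own):

```python
import tensorflow as tf

checkpoint = 'checkpoint/model.ckpt-13800'  # placeholder path

with tf.Session() as sess:
    saver = tf.train.import_meta_graph(checkpoint + '.meta')
    saver.restore(sess, checkpoint)
    # Dump all node names so the output node can be confirmed by eye.
    for node in sess.graph.as_graph_def().node:
        print(node.name)
```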

harsh-agar commented 6 years ago

I tried printing it using the method mentioned in #5 and pasted those lines into the training script, but it prints many layers that I'm unable to interpret. I am able to convert the model into a frozen graph using this as the 'output_node_name', but I get an error when I try to convert it into a .uff file for running on a Jetson TX2 with TensorRT.

Thanks

chandrakantkhandelwal commented 6 years ago

@harsh-agar I believe TensorRT doesn't have Python API support on the TX1/TX2. Also, there are many TensorFlow ops not yet supported by TensorRT. If you could post the error, I could tell you more about it.

harsh-agar commented 6 years ago

Yeah, but I'm converting it to .uff on my computer and then plan to export it to the Jetson and run it on TensorRT by some method (yet to be figured out).

This is the error I get when using uff.from_tensorflow:

Using output node ENet_1/logits_to_softmax
Converting to UFF graph
Traceback (most recent call last):
  File "freeze_graph.py", line 410, in <module>
    run_main()
  File "freeze_graph.py", line 407, in run_main
    app.run(main=my_main, argv=[sys.argv[0]] + unparsed)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "freeze_graph.py", line 406, in <lambda>
    my_main = lambda unused_args: main(unused_args, flags)
  File "freeze_graph.py", line 300, in main
    flags.saved_model_tags, checkpoint_version)
  File "freeze_graph.py", line 282, in freeze_graph
    checkpoint_version=checkpoint_version)
  File "freeze_graph.py", line 180, in freeze_graph_with_def_protos
    uff_model = uff.from_tensorflow(tf_model, ["ENet_1/logits_to_softmax"])
  File "/usr/local/lib/python2.7/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
    name="main")
  File "/usr/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/usr/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 42, in convert_tf2uff_node
    tf_node = tf_nodes[name]
KeyError: 'ENet_1/logits_to_softmax'

I've added the conversion call to TensorFlow's freeze_graph.py script.

chandrakantkhandelwal commented 6 years ago

Why is the node name 'ENet_1/logits_to_softmax'? I suppose it should be 'ENet/logits_to_softmax'. I couldn't infer more from this error. If you could share your freeze_graph.py (on your Git, or mail it to me at ck.iiitdm@gmail.com), I can try it on my computer with the ENet model I have (trained using the same code).

The following links might help:

1) For converting a TF model to a TRT model: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/topics/topics/workflows/tf_to_tensorrt.html
2) For porting a TF model to the TX2 (I think what you have said works, though it has some more details): https://devtalk.nvidia.com/default/topic/1030437/jetson-tx2/deploy-tensorflow-model-on-tx2-with-tensorrt/
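
Also, once the graph is frozen, the simplest conversion path I know of is uff.from_tensorflow_frozen_model on the frozen .pb, rather than calling uff.from_tensorflow inside freeze_graph.py. A sketch, assuming the uff Python package that ships with TensorRT (file names are placeholders):

```python
import uff

# The output node must exist in the frozen GraphDef under exactly this name,
# otherwise the converter raises a KeyError like the one above.
uff_model = uff.from_tensorflow_frozen_model(
    'frozen_enet.pb',
    output_nodes=['ENet/logits_to_softmax'],
    output_filename='enet.uff')
```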

harsh-agar commented 6 years ago

Oh, I tried it with 'ENet/logits_to_softmax' as well and got the same error; I was just trying something out. And sure, I'll send you the freeze_graph.py file that I'm using.

Thanks

harsh-agar commented 6 years ago

Please check out this code: https://github.com/harsh-agar/E-Net/blob/4aeca7711539a89d0991a918005b5a1413dbeb3d/freeze_graph.py#L171. I made my changes on lines 171–173.

This is the command I used to run it:

python freeze_graph.py --input_graph=../TensorFlow-ENet/checkpoint/graph.pbtxt --input_checkpoint=../TensorFlow-ENet/checkpoint/model.ckpt-13800 --output_graph=frozen_enet.pb --output_node_names='ENet/logits_to_softmax' --restore_op_name=save/restore_all --clear_devices

chandrakantkhandelwal commented 6 years ago

Cool, I will take a look at it. Please expect some delay in response.

harsh-agar commented 6 years ago

Yeah sure, I'll be awaiting your reply.

chandrakantkhandelwal commented 6 years ago

Hi @harsh-agar, I have taken a look at your code; it was too lengthy. I'm attaching the path to my code for freezing the graph and then creating a UFF parser for it. It takes a model checkpoint as input. It freezes the model properly, but while converting the TF model to UFF it gives an error related to an unsupported layer. I didn't try much to solve this error.

https://github.com/chandrakantkhandelwal/PracticeCodes/blob/master/uff_parser_enet.py

You will see that it gives the following error:

Converting as custom op Slice ENet/Slice
name: "ENet/Slice"
op: "Slice"
input: "ENet/Shape_1"
input: "ENet/Slice/begin"
input: "ENet/Slice/size"
attr {
  key: "Index"
  value {
    type: DT_INT32
  }
}
attr {
  key: "T"
  value {
    type: DT_INT32
  }
}

harsh-agar commented 6 years ago

Thanks a lot for the script @chandrakantkhandelwal

Have you found some (preferably easy) way to write a custom 'Slice' layer for TensorRT?

Also, have you been able to get any object-detection or semantic-segmentation models to work on TensorRT? I could only get the MobileNet classification model to work on TensorRT 3.0.4.

chandrakantkhandelwal commented 6 years ago

@harsh-agar I haven't tried any custom layer implementation for TensorFlow models; I implemented most of the models using the C++ APIs in TensorRT. Have a look at this Git repo; it has detection/segmentation examples using TensorRT, with custom plugin examples too: https://github.com/dusty-nv/jetson-inference

Good luck!

harsh-agar commented 6 years ago

Yeah @chandrakantkhandelwal, I saw this repo, but they implemented it using the DIGITS server (with a Caffe backend), which won't be of much help to me. Have you been able to convert any trained TensorFlow model to TensorRT? Also, which models did you implement?

chandrakantkhandelwal commented 6 years ago

I did convert detection/segmentation models at work and therefore cannot share the implementation details. DIGITS is just one way of doing it; if you are comfortable with the TensorRT C++ APIs, then your training framework is not a bottleneck. Otherwise I would suggest you start using Caffe (I guess the DIGITS examples in TensorRT are also based on Caffe), as TensorRT has good support for Caffe layers.

harsh-agar commented 6 years ago

Thanks @chandrakantkhandelwal, I will look into this.