chandrakantkhandelwal closed this issue 7 years ago
Got the answer in closed issue #5.
Hi @chandrakantkhandelwal, can you help me figure out what 'output_node_names' should be here? I think it is ['ENet/logits_to_softmax'], but I'm not getting the correct result.
Thanks
@harsh-agar I have used the same name for the output node and it's working. However, the best way to verify it is to print all the nodes of the graph and confirm the name.
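For reference, a minimal sketch (TF 1.x, assuming the ENet graph has already been built in the default graph, e.g. inside the training script) of how to print every node name:

```python
# Minimal sketch (TF 1.x): list every node in the current default graph
# so the exact output node name can be confirmed.
import tensorflow as tf

for node in tf.get_default_graph().as_graph_def().node:
    print(node.name)
```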
I tried printing them using the method mentioned in #5 by pasting those lines into the train script, but it prints many layers that I'm unable to interpret. I am able to convert the model into a frozen graph using this as the 'output_node_name', but I get an error when I try to convert it into a .uff file for running on a Jetson TX2 with TensorRT.
Thanks
@harsh-agar I guess TensorRT doesn't have Python API support on the TX1/TX2. Also, there are many TensorFlow ops not yet supported by TensorRT. If you could post the error, I can say more about it.
Yeah, but I'm trying to convert it into .uff on my computer and then want to export it to the Jetson and run it on TensorRT using some method (which is yet to be figured out).
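Roughly, the conversion step looks like this (a sketch; the file paths and output node name are from my setup, and uff.from_tensorflow comes from the uff package shipped with TensorRT):

```python
# Sketch of the frozen-graph -> UFF conversion step (TensorRT's uff package).
import uff
import tensorflow as tf

# Load the frozen GraphDef produced by freeze_graph.py.
graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_enet.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Convert to UFF, naming the output node of the network.
uff.from_tensorflow(graph_def,
                    output_nodes=['ENet/logits_to_softmax'],
                    output_filename='enet.uff')
```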
This is the error I get when using uff.from_tensorflow:
```
Using output node ENet_1/logits_to_softmax
Converting to UFF graph
Traceback (most recent call last):
  File "freeze_graph.py", line 410, in
```
I've added the conversion code to TensorFlow's freeze_graph.py script.
Why is the node name 'ENet_1/logits_to_softmax'? I supposed it was 'ENet/logits_to_softmax'. I couldn't infer more after looking at this error. If you could share your freeze_graph.py (on your Git, or mail it to me at ck.iiitdm@gmail.com), I can try it on my computer with the ENet model I have (trained using the same code).
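As an aside, one possible cause (an assumption on my part, not confirmed for this case) is that TF uniquifies name scopes, so building the network a second time in the same default graph produces 'ENet_1/...' names:

```python
# Illustration only: TF appends _1, _2, ... when a name scope is reused,
# which is one way an 'ENet_1/...' node name can appear.
import tensorflow as tf

with tf.name_scope('ENet'):
    a = tf.identity(tf.zeros([1]), name='logits_to_softmax')
with tf.name_scope('ENet'):  # same scope name opened a second time
    b = tf.identity(tf.zeros([1]), name='logits_to_softmax')

print(a.name)  # ENet/logits_to_softmax:0
print(b.name)  # ENet_1/logits_to_softmax:0
```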
The following links might help:
1) For converting a TF model to a TRT model: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/topics/topics/workflows/tf_to_tensorrt.html
2) For porting a TF model to the TX2 (I think what you have said works, though it has some more details): https://devtalk.nvidia.com/default/topic/1030437/jetson-tx2/deploy-tensorflow-model-on-tx2-with-tensorrt/
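In the spirit of the first link, a host-side sketch of building an engine from the UFF file (TensorRT 3.x Python API; the input name and dimensions are placeholders, and the exact calls vary by TensorRT version):

```python
# Sketch: parse a UFF model and build a TensorRT engine (TRT 3.x style API).
import tensorrt as trt
from tensorrt.parsers import uffparser

uff_model = open('enet.uff', 'rb').read()

parser = uffparser.create_uff_parser()
parser.register_input('input_image', (3, 360, 480), 0)  # CHW dims: placeholder
parser.register_output('ENet/logits_to_softmax')

engine = trt.utils.uff_to_trt_engine(
    trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO),
    uff_model, parser, 1, 1 << 25)  # max batch size, max workspace size
```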
Oh, I tried it with 'ENet/logits_to_softmax' as well and got the same error there; I was just trying something out. And sure, I'll send you the freeze_graph.py file that I'm using.
Thanks
Please check out this code: https://github.com/harsh-agar/E-Net/blob/4aeca7711539a89d0991a918005b5a1413dbeb3d/freeze_graph.py#L171. I made changes on lines 171-173.
This is the command used for running it:

```
python freeze_graph.py \
  --input_graph=../TensorFlow-ENet/checkpoint/graph.pbtxt \
  --input_checkpoint=../TensorFlow-ENet/checkpoint/model.ckpt-13800 \
  --output_graph=frozen_enet.pb \
  --output_node_names='ENet/logits_to_softmax' \
  --restore_op_name=save/restore_all \
  --clear_devices
```
Cool, I will take a look at it. Please expect some delay in my response.
Yeah sure, I'll be awaiting your reply.
Hi @harsh-agar, I have taken a look at your code; it was too lengthy. Attaching the path to my code file for freezing the graph and then creating a UFF parser for it. It takes a model checkpoint as input. It freezes the model properly, but while converting the TF model to UFF it gives an error related to an unsupported layer. I didn't try much to solve this error.
https://github.com/chandrakantkhandelwal/PracticeCodes/blob/master/uff_parser_enet.py
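For context, the freeze step in that file boils down to something like this (a condensed sketch, not the script verbatim; TF 1.x, with the checkpoint path shown only as an example):

```python
# Condensed sketch of freezing an ENet checkpoint (TF 1.x):
# restore the variables, then fold them into constants up to the output node.
import tensorflow as tf
from tensorflow.python.framework import graph_util

checkpoint = '../TensorFlow-ENet/checkpoint/model.ckpt-13800'
output_node = 'ENet/logits_to_softmax'

saver = tf.train.import_meta_graph(checkpoint + '.meta')
with tf.Session() as sess:
    saver.restore(sess, checkpoint)
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), [output_node])

with tf.gfile.GFile('frozen_enet.pb', 'wb') as f:
    f.write(frozen.SerializeToString())
```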
You will see that it gives the following error:
```
Converting as custom op Slice ENet/Slice
name: "ENet/Slice"
op: "Slice"
input: "ENet/Shape_1"
input: "ENet/Slice/begin"
input: "ENet/Slice/size"
attr {
  key: "Index"
  value { type: DT_INT32 }
}
attr {
  key: "T"
  value { type: DT_INT32 }
}
```
Thanks a lot for the script, @chandrakantkhandelwal.
Have you found some (preferably easy) way to write a custom 'Slice' layer for TensorRT?
Also, have you been able to get any object-detection or semantic-segmentation models to work on TensorRT? I could only get a MobileNet classification model working on TensorRT 3.0.4.
@harsh-agar I haven't tried any custom layer implementation for TensorFlow models; I implemented most of the models using the C++ APIs in TensorRT. Have a look at this git repo; it has detection/segmentation examples using TensorRT, with custom plugin examples too: https://github.com/dusty-nv/jetson-inference
Good luck!
Yeah @chandrakantkhandelwal, I saw this repo, but they implemented it using the DIGITS server (with a Caffe backend), which won't be of much help to me. Have you been able to convert any trained TensorFlow model to TensorRT? Also, which models did you implement?
I did convert detection/segmentation models at work, so I cannot share the implementation details. DIGITS is just one way of doing it; if you are comfortable with the TensorRT C++ APIs, then your training framework is not a bottleneck. Otherwise I would suggest you start using Caffe (I guess the DIGITS examples in TensorRT are also based on Caffe), as TensorRT has good support for Caffe layers.
Thanks @chandrakantkhandelwal, will just look into this.
I was trying to freeze the graph; however, you are using a TensorFlow input pipeline instead of a placeholder. Could you please explain how to remove the input pipeline and add a node for reading an input image?
Thanks!
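For what it's worth, a minimal sketch of one common approach (the builder function, its arguments, and the image size below are assumptions based on the TensorFlow-ENet repo, not taken from this thread): build the network on a placeholder instead of the pipeline tensor, then restore the checkpoint and freeze.

```python
# Minimal sketch: build ENet on a placeholder instead of the queue-based
# input pipeline, so the frozen graph exposes a plain image input node.
# ENet(...) and its arguments are illustrative placeholders.
import tensorflow as tf

image_height, image_width = 360, 480  # assumed training resolution

input_image = tf.placeholder(
    tf.float32,
    shape=[1, image_height, image_width, 3],
    name='input_image')

# Build the network on the placeholder rather than the pipeline tensor:
# probabilities = ENet(input_image, num_classes=12, is_training=False)
# Then restore the checkpoint and freeze with 'input_image' as the input
# and 'ENet/logits_to_softmax' as the output node.
```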