Hi Cathal, manipulating graph defs manually is highly discouraged.
You can try to inspect the graph nodes and guess the input from the node names, or you can try to print the original tensor name from the code that generates that graph. I am sorry, but there is not much more we can do to help.
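(For illustration, a minimal sketch of inspecting node names in a frozen GraphDef, assuming the TF1-style API; "my_graph.pb" is a placeholder path.)

```python
import tensorflow as tf

# Load a serialized GraphDef from disk; the path is a placeholder.
graph_def = tf.GraphDef()
with tf.gfile.GFile("my_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Print every node so you can spot likely input placeholders and output ops.
for node in graph_def.node:
    print(node.name, node.op)
```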
Hi @andresusanopinto, thanks for the response. I hope you don't mind me following up on this here.
I just wanted to check in case there was a misunderstanding. I don't want to manipulate the graph def. I trained a classifier on the Universal Sentence Encoder module as outlined here. In the advanced section it is noted that you can set the trainable parameter to True, in which case training will back-propagate (I am guessing?) into the module, so the weights of the original model also get updated.
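(For reference, the setup being described might look like the following sketch, assuming the TF1-style hub.Module API; the module URL is the public Universal Sentence Encoder and may differ from the exact version used here.)

```python
import tensorflow as tf
import tensorflow_hub as hub

# Load the module with trainable=True so its variables are added to the
# trainable collection and are updated while training the classifier on top.
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2",
                   trainable=True)
sentences = tf.placeholder(tf.string, shape=[None])
embeddings = embed(sentences)  # these weights now change during fine-tuning
```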
When this completed, I wanted to check whether the embeddings generated by the module differed now that I had retrained it with that parameter set. What I wanted to do was simply restore the saved model I had retrained, but I could not figure out how to do this. Is this considered manipulating the graph def? I didn't think it was, so maybe I was looking at the wrong example (the one I linked in the previous comment), which must have been manipulating the graph.
Is it possible to restore a module after I have retrained it? I was not sure how to point to it in my code, since I keep linking back to the original module as outlined in the TensorFlow Hub docs. I hope that makes a little more sense; if this is still the same issue as before, apologies. Thanks, Cathal
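(One possible way to persist and reload the fine-tuned module, sketched under the assumption of the TF1-style hub.Module API: export the updated module with Module.export() and then load it from that local path instead of the tfhub.dev URL. Paths are placeholders and the actual training loop is omitted.)

```python
import tensorflow as tf
import tensorflow_hub as hub

# Fine-tune, then export the module's updated variables to disk.
with tf.Graph().as_default():
    embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2",
                       trainable=True)
    sentences = tf.placeholder(tf.string, shape=[None])
    embeddings = embed(sentences)
    with tf.Session() as sess:
        sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
        # ... train the classifier built on top of `embeddings` here ...
        embed.export("/tmp/retrained_use", sess)  # writes a reusable module

# Later, point hub.Module at the exported directory to get the retrained weights.
with tf.Graph().as_default():
    embed = hub.Module("/tmp/retrained_use")
    with tf.Session() as sess:
        sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
        print(sess.run(embed(["Hello world"])))
```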
Hi, I am trying to create a module from an existing graph as described here. The problem is that I don't know how to get the input and return tensor names. I keep getting errors like:
ValueError: Requested return tensor 'report_uninitialized_variables:0' not found in graph def
How can I find the names of the inputs and outputs for a module so I can reload it? Thanks, Cathal
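(If the existing graph was exported as a SavedModel, its signature defs already record the exact input/output tensor names, so they do not have to be guessed. A minimal sketch using the TF1 loader; the "/tmp/my_saved_model" path and the "serve" tag are placeholders for your own export.)

```python
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel and read its MetaGraphDef.
    meta_graph = tf.saved_model.loader.load(sess, ["serve"], "/tmp/my_saved_model")
    # Each signature lists the tensor names used for inputs and outputs.
    for sig_name, sig in meta_graph.signature_def.items():
        print("Signature:", sig_name)
        print("  inputs: ", {k: v.name for k, v in sig.inputs.items()})
        print("  outputs:", {k: v.name for k, v in sig.outputs.items()})
```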