lisamarion opened this issue 5 years ago
I think the SavedModel format is mainly used for TF-Serving. For ordinary inference, you might need to convert the Keras H5 file to a frozen .pb file and then load that .pb file in TensorFlowSharp.
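Loading the frozen graph on the TensorFlowSharp side looks roughly like this (a minimal sketch; `"frozen_model.pb"`, the input shape, and the `"input"`/`"output"` operation names are placeholders for your own values):

```csharp
using System;
using System.IO;
using TensorFlow;

class FrozenGraphInference
{
    static void Main ()
    {
        // Import the frozen GraphDef into an empty graph.
        var graph = new TFGraph ();
        graph.Import (File.ReadAllBytes ("frozen_model.pb"));

        using (var session = new TFSession (graph))
        {
            // Placeholder input: one 224x224 RGB image, as MobileNet expects.
            var inputTensor = new TFTensor (new float [1, 224, 224, 3]);

            var runner = session.GetRunner ();
            runner.AddInput (graph ["input"] [0], inputTensor);
            runner.Fetch (graph ["output"] [0]);

            // With a frozen graph all variables are baked in as constants,
            // so no initialization step is needed before Run().
            var output = runner.Run ();
            Console.WriteLine (output [0]);
        }
    }
}
```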
Yes, I am able to do regular inference by loading a frozen graph .pb file as shown in the Examples. However, since TensorFlowSharp provides a function to load the SavedModel format into a session, it should also be possible to run the model once it has been loaded. From the error message it seems that the variables are not being initialized successfully, so I am wondering whether there is an additional initialization step that is not needed when loading from a frozen graph.
If anyone has successfully run inference using a SavedModel, please let me know if I am on the right track.
I'm using the SavedModel format in TensorFlowSharp extensively. You are completely on the right path. I would suggest checking the signature_def_map that is generated by the signature_def_utils.build_signature_def function and passed as a parameter to the add_meta_graph_and_variables method when exporting the Keras H5 model. Check the names of the input and output operations there and use the same names in TensorFlowSharp.
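To verify the names on the TensorFlowSharp side, you can enumerate the graph's operations after loading and compare them against your signature. A minimal sketch (`"export_dir"` is a placeholder, and the `"serve"` tag is an assumption; it must match the tag passed to add_meta_graph_and_variables):

```csharp
using System;
using TensorFlow;

class InspectSavedModel
{
    static void Main ()
    {
        var graph = new TFGraph ();

        // The buffer receives the serialized MetaGraphDef, which contains
        // the signature defs that were exported from Python.
        var metaGraphDef = new TFBuffer ();

        using (var session = TFSession.FromSavedModel (
            new TFSessionOptions (), null, "export_dir",
            new [] { "serve" }, graph, metaGraphDef))
        {
            // List every operation in the loaded graph; the input/output
            // names from your signature_def_map should appear here.
            foreach (TFOperation op in graph)
                Console.WriteLine (op.Name);
        }
    }
}
```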
I am trying to run a MobileNet architecture for inference. The model was built in Keras and exported from Python in the saved_model format. I was able to successfully load the model and set up the runner with the code below, but I get an exception when I call runner.Run():
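(Simplified here; `"export_dir"`, the `"serve"` tag, and the operation names stand in for my actual values.)

```csharp
using System;
using TensorFlow;

class SavedModelInference
{
    static void Main ()
    {
        var graph = new TFGraph ();

        // Load the SavedModel produced by the Python export; "serve" is
        // the tag the model was saved with.
        using (var session = TFSession.FromSavedModel (
            new TFSessionOptions (), null, "export_dir",
            new [] { "serve" }, graph, new TFBuffer ()))
        {
            // Placeholder 224x224 RGB input for MobileNet; the operation
            // names below stand in for the ones from my signature_def.
            var inputTensor = new TFTensor (new float [1, 224, 224, 3]);

            var runner = session.GetRunner ();
            runner.AddInput (graph ["input_1"] [0], inputTensor);
            runner.Fetch (graph ["act_softmax/Softmax"] [0]);

            var output = runner.Run ();   // <- the exception is thrown here
            Console.WriteLine (output [0]);
        }
    }
}
```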
The error I get is:
Is there some additional step required to initialize variables when loading a saved_model? Running the same architecture saved as a frozen graph with File.ReadAllBytes() and graph.Import() does not give me this error, but I would strongly prefer to use the saved_model format, since that seems to be what TensorFlow supports for serving.
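For example, would an explicit initialization step along these lines be needed? (A guess on my part; whether a Keras SavedModel export actually contains an op named `"init"`, or one registered via legacy_init_op, is an assumption.)

```csharp
using TensorFlow;

static class InitHelper
{
    // Run the graph's init op once, after loading and before inference,
    // if such an op exists. Whether the exported graph contains an op
    // literally named "init" is a guess on my part.
    public static void RunInitOpIfPresent (TFGraph graph, TFSession session)
    {
        var initOp = graph ["init"];
        if (initOp != null)
            session.GetRunner ().AddTarget (initOp).Run ();
    }
}
```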