migueldeicaza / TensorFlowSharp

TensorFlow API for .NET languages

SavedModel format loads successfully but cannot run #416

Open lisamarion opened 5 years ago

lisamarion commented 5 years ago

I am trying to run inference with a MobileNet architecture. The model was built in Keras and exported from Python in the SavedModel format. I can load the model and set up the runner with the code below, but I get an exception when I call runner.Run():

using TensorFlow;
using System.IO;

var graph = new TFGraph();
using (var session = new TFSession(graph))
using (var sessOptions = new TFSessionOptions())
using (var metaGraph = new TFBuffer())
{
  // Load the SavedModel (tag "serve") into the graph and session.
  session.FromSavedModel(sessOptions, null, @"saved_model_dir", new[]{"serve"}, graph, metaGraph);

  // CreateTensorFromImageFile builds the input image tensor (helper not shown).
  var tensor = CreateTensorFromImageFile(@"image.jpg");

  var runner = session.GetRunner();
  runner.AddInput(graph["input_name"][0], tensor).Fetch(graph["output_name"][0]);
  var output = runner.Run();   // throws TFException here
}

The error I get is:

TensorFlow.TFException
HResult=0x80131500
Message=Error while reading resource variable conv1/kernel from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/conv1/kernel)
  [[{{node conv1/Conv2D/ReadVariableOp}} = ReadVariableOp[dtype=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Source=TensorFlowSharp
StackTrace:
  at TensorFlow.TFStatus.CheckMaybeRaise(TFStatus incomingStatus, Boolean last)
  at TensorFlow.TFSession.Run(TFOutput[] inputs, TFTensor[] inputValues, TFOutput[] outputs, TFOperation[] targetOpers, TFBuffer runMetadata, TFBuffer runOptions, TFStatus status)
  at TensorFlow.TFSession.Runner.Run(TFStatus status)

Is there an additional step required to initialize variables when loading a SavedModel? Running the same architecture saved as a frozen-graph .pb, loaded with File.ReadAllBytes() and graph.Import(), does not give me this error, but I would strongly prefer to use the SavedModel format since that seems to be what TensorFlow supports for serving.

captainst commented 5 years ago

I think the SavedModel format is mainly used for TF-Serving. For ordinary inference, you might need to convert the Keras H5 file to a .pb file and then load the .pb file in TensorFlowSharp.
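
For reference, a rough sketch of that conversion, assuming a TF 1.x-era Keras setup; the model path and output file name below are placeholders:

import tensorflow as tf
from tensorflow.python.framework import graph_util

# Load the trained Keras model in inference mode.
tf.keras.backend.set_learning_phase(0)
model = tf.keras.models.load_model("model.h5")
sess = tf.keras.backend.get_session()

# Freeze: convert all variables to constants so the graph is self-contained.
output_node_names = [out.op.name for out in model.outputs]
frozen_graph_def = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_node_names)

with tf.gfile.GFile("frozen_model.pb", "wb") as f:
    f.write(frozen_graph_def.SerializeToString())

# These are the operation names to use from TensorFlowSharp.
print("inputs :", [inp.op.name for inp in model.inputs])
print("outputs:", output_node_names)

The resulting frozen_model.pb can then be loaded with graph.Import() as in the TensorFlowSharp examples.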

lisamarion commented 5 years ago

Yes, I am able to do regular inference by loading a frozen graph .pb file as shown in the Examples. However, since TensorFlowSharp provides a function to load the SavedModel format into a session, it should also be possible to run the model once it has been loaded. From the error message it looks like the variables are not being initialized, and I am wondering whether there is an additional initialization step required here that is not needed when loading from a frozen graph.

If anyone has successfully run inference using a SavedModel, please let me know if I am on the right track.

DmitryGeyzersky commented 5 years ago

I'm using the SavedModel format in TensorFlowSharp extensively; you are completely on the right path. I would suggest checking the signature_def_map that is generated by the signature_def_utils.build_signature_def function and passed to the add_meta_graph_and_variables method when exporting the Keras H5 model. Check the names of the input and output operations and use the same names in TensorFlowSharp.
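
In case it helps, here is a rough sketch of that export path, again assuming a TF 1.x-era Keras session; the model path, export directory, and the "image"/"scores" signature keys are placeholders:

import tensorflow as tf
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants, signature_def_utils, tag_constants, utils

# Load the trained Keras model in inference mode and grab its session.
tf.keras.backend.set_learning_phase(0)
model = tf.keras.models.load_model("model.h5")
sess = tf.keras.backend.get_session()

# Build a serving signature that names the input and output tensors.
signature = signature_def_utils.build_signature_def(
    inputs={"image": utils.build_tensor_info(model.input)},
    outputs={"scores": utils.build_tensor_info(model.output)},
    method_name=signature_constants.PREDICT_METHOD_NAME)

builder = saved_model_builder.SavedModelBuilder("saved_model_dir")
builder.add_meta_graph_and_variables(
    sess,
    tags=[tag_constants.SERVING],
    signature_def_map={signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
builder.save()

# graph["..."] in TensorFlowSharp expects operation names (the tensor name without ":0").
print("input op :", model.input.op.name)
print("output op:", model.output.op.name)

The tag_constants.SERVING tag is the same "serve" tag passed to FromSavedModel, and the printed operation names are what graph["input_name"] / graph["output_name"] should refer to on the C# side.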