onnx / onnx-tensorflow

Tensorflow Backend for ONNX

py_func issue for tensorflow serving #903

Closed jorie-peng closed 3 years ago

jorie-peng commented 3 years ago

When converting PyTorch -> ONNX -> TensorFlow (.pb) -> TensorFlow SavedModel, there is an issue with TensorFlow Serving. I can load the SavedModel normally, but serving it with TensorFlow Serving fails; the error output is below. Versions: onnx=1.7.0, onnx-tf=1.7.0 (branch tf1.x), tensorflow=1.15.0.

As far as I know, py_func can only run inside a Python runtime, but TensorFlow Serving runs in C++, so the op fails; the only solution is to remove py_func from the TensorFlow model. According to the similar issue #167, the fix is to change strict to False, but my model only converts successfully with strict=True.
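For reference, the #167 workaround is just the strict flag on onnx-tf's prepare, roughly as sketched here (the model path is a placeholder; this is exactly the call that fails for my model):

import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("model/my_model.onnx")  # placeholder path
tf_rep = prepare(onnx_model, strict=False)     # the #167 workaround; fails for this model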

Error output:
……
2021-04-16 16:55:54.476039: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2021-04-16 16:55:54.476052: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2021-04-16 16:55:54.480318: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14109 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:af:00.0, compute capability: 7.5)
2021-04-16 16:55:54.628831: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:311] SavedModel load for tags { serve }; Status: fail. Took 2352396 microseconds.
2021-04-16 16:55:54.630854: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: wmtype version: 2} failed: Invalid argument: No OpKernel was registered to support Op 'PyFunc' used by {{node PyFunc}}with these attrs: [Tin=[DT_FLOAT, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_BOOL, DT_STRING, DT_BOOL], _output_shapes=[], Tout=[DT_FLOAT], token="pyfunc_0"]
Registered devices: [CPU, GPU]
Registered kernels:
  <no registered kernels>

         [[PyFunc]]

When I check my .pb model, it really does contain a PyFunc op. How can I convert my PyTorch model without PyFunc? Thanks! Here is my conversion code:

import torch
import onnx
from onnx_tf.backend import prepare

print('load pytorch model')
from ResNeSt.torch import resnest50
smodel = "ResNeSt/model/resnest50-528c19ca.pth"
state_dict = torch.load(smodel, map_location='cpu')
pytorchmodel = resnest50(pretrained=False)
pytorchmodel.load_state_dict(state_dict)
pytorchmodel.eval()  # switch to inference mode before export

print('pytorch convert to onnx')
x = torch.randn(1, 3, 224, 224)  # dummy input (assumed 224x224 RGB); batch dim is dynamic below
torch.onnx.export(pytorchmodel,              # model being run
                  x,                         # model input (or a tuple for multiple inputs)
                  "model/my_model.onnx",     # where to save the model (can be a file or file-like object)
                  export_params=True,        # store the trained parameter weights inside the model file
                  opset_version=10,          # the ONNX version to export the model to
                  do_constant_folding=True,  # whether to execute constant folding for optimization
                  input_names=['input'],     # the model's input names
                  output_names=['output'],   # the model's output names
                  dynamic_axes={'input': {0: 'batch_size'},    # variable length axes
                                'output': {0: 'batch_size'}})

print('onnx convert to .pb')
model = onnx.load('model/my_model.onnx')
tf_rep = prepare(model, strict=True)
tf_rep.export_graph("model/tfmodel/my_model.pb")  # export the frozen graph

print('.pb convert to SavedModel')
import tensorflow as tf
from tensorflow.python.saved_model import utils as smutils
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
from tensorflow.python.saved_model import tag_constants

input_keys = tf_rep.inputs
output_keys = tf_rep.outputs
tf_dict = tf_rep.tensor_dict
input_tensors = {key: tf_dict[key] for key in input_keys}
output_tensors = {key: tf_dict[key] for key in output_keys}

export_dir = "model/savedmodel"  # target directory for the SavedModel (assumed path)

g = tf.Graph()
graph_def = tf.GraphDef()

with g.as_default():
    with open("model/tfmodel/my_model.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    # name must explicitly be set to an empty string, otherwise
    # TF will prepend an `import` scope name on all operations
    tf.import_graph_def(graph_def, name="")

    tensor_info_inputs = {name: smutils.build_tensor_info(in_tensor)
                          for name, in_tensor in input_tensors.items()}

    tensor_info_outputs = {name: smutils.build_tensor_info(out_tensor)
                           for name, out_tensor in output_tensors.items()}

    prediction_signature = signature_def_utils.build_signature_def(
        inputs=tensor_info_inputs,
        outputs=tensor_info_outputs,
        method_name=signature_constants.PREDICT_METHOD_NAME)

    # the session must be bound to the graph we just imported
    sess = tf.Session(graph=g, config=tf.ConfigProto(allow_soft_placement=True))
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(
        sess, [tag_constants.SERVING],
        signature_def_map={"predict_images": prediction_signature})
    builder.save()
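To confirm the problem, a quick way to scan the frozen .pb for PyFunc nodes is sketched below (TF 1.x APIs; the path matches the script above):

import tensorflow as tf

graph_def = tf.GraphDef()
with open("model/tfmodel/my_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

py_func_nodes = [n.name for n in graph_def.node if n.op == "PyFunc"]
print("PyFunc nodes:", py_func_nodes)  # non-empty means TF Serving will reject the model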

My model is a ResNeSt model, so it contains max pooling. PS: when converting ONNX to .pb I use strict=True, following the official tutorial; if I change it to strict=False, the conversion fails.

here is pytorch model file: https://s3.us-west-1.wasabisys.com/resnest/torch/resnest50-528c19ca.pth

jorie-peng commented 3 years ago

OK, I found the reason: the ceil_mode parameter of nn.AvgPool2d. When it is set to True, the .pb model contains a PyFunc op.
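A minimal sketch of the effect (shapes chosen purely for illustration); per the finding above, the ceil_mode=True variant is what ends up as a PyFunc in the exported .pb:

import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)

# ceil_mode=True rounds the output size up instead of down
pool_ceil = nn.AvgPool2d(kernel_size=3, stride=2, ceil_mode=True)
pool_floor = nn.AvgPool2d(kernel_size=3, stride=2, ceil_mode=False)

print(pool_ceil(x).shape)   # torch.Size([1, 3, 4, 4])
print(pool_floor(x).shape)  # torch.Size([1, 3, 3, 3])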

fengxin619 commented 3 years ago

I use onnx=1.7.0, onnx-tf=1.7.0 (branch tf1.x), tensorflow=1.15.0, but it also requires me to install tensorflow-addons. What is the problem?

I want to convert .pth -> ONNX -> .pb -> SavedModel. How can I do that?

chinhuang007 commented 3 years ago

The tf1.x branch is no longer actively maintained. It will work as-is, producing a .pb, not a SavedModel. Upgrading to TF 2 is the easiest path to getting an ONNX model into a TF SavedModel.
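Under TF 2 the ONNX-to-SavedModel step collapses to a few lines, since export_graph there writes a SavedModel directory directly. A minimal sketch, assuming a current onnx-tf with TF 2 installed (paths are placeholders):

import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("model/my_model.onnx")  # placeholder path
tf_rep = prepare(onnx_model)
tf_rep.export_graph("model/saved_model")       # writes a SavedModel directory, servable as-is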

MiracleMob commented 2 years ago

Hi, did you get the variables file in the SavedModel directory?