tensorflow / serving

A flexible, high-performance serving system for machine learning models
https://www.tensorflow.org/serving
Apache License 2.0

How to use the classify Restful API? #994

Closed: walton-wang929 closed this issue 5 years ago

walton-wang929 commented 6 years ago

hi guys,

Recently I have been very confused about using the TensorFlow Serving RESTful API. I trained an image classification model, exported it as a TF Serving SavedModel, and deployed it successfully, but I ran into a big problem calling the RESTful API.

If you have experience with it, you are welcome to leave a message. Thank you very much.

elvys-zhang commented 6 years ago

Your problem is?

walton-wang929 commented 6 years ago

My problem is solved, @elvys-zhang. The problem was that after I exported the TF Serving SavedModel, I could not call the RESTful API, because the official documentation about the RESTful API is not good; it mainly provides a gRPC example. Later I will organize my notes and write a post, and attach it here. Thank you very much.

elvys-zhang commented 6 years ago

Fine, looking forward to your post.

netfs commented 6 years ago

curious to hear what did not work in the official doc here:

https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/api_rest.md#example

happy to accept patches to improve the documentation (so everyone benefits).

thanks!

walton-wang929 commented 6 years ago

Yep. I will write documents here later.

xujie32 commented 6 years ago

Hi Walton,

I am confused about how to classify an image using the curl tool to make REST API calls. Specifically, I don't know how to represent an image in a request: I don't think directly using '{"instances": [xxx.jpg, yyy.jpg]}' will work, and I am also confused about encoding binary values.

It would be nice if you could share an example. Thanks in advance.

prashanthbasani commented 6 years ago

@xujie32 Assuming you are using a Linux box to execute cURL commands, replace the text in angle brackets with your own values. Note: I did not test this command, but it should mostly look like this; at least you will get an idea.

curl -X POST \
  http://${DOCKER_HOST}:8501/v1/models/<model_name>/versions/<version>:classify \
  -d '{
    "signature_name": "<signature_name>",
    "examples": [
      {
        "image": { "b64": "<base64-encoded image, or $(base64 <image_path>)>" }
      }
    ]
  }'

xujie32 commented 6 years ago

Hi Walton,

Thanks for replying.

Now I know how to organize a request command, but how do I specify the input when exporting the model? I used to follow inception_save_model.py, but since we are using a base64 encoded string now, I think there may be some changes. I am not sure though.

Thanks again.

walton-wang929 commented 6 years ago

@xujie32 , like the example below:

When you export the TF Serving model, you define the input information:

    serialized_tf_example = tf.placeholder(tf.string, name='tf_example')
    feature_configs = {
        'image/encoded': tf.FixedLenFeature(
            shape=[], dtype=tf.string),
    }
    tf_example = tf.parse_example(serialized_tf_example, feature_configs)

    jpegs = tf_example['image/encoded']

    image_string = tf.reshape(jpegs, shape=[])
.....

        # Export inference model.
        output_path = os.path.join(tf.compat.as_bytes(output_dir),tf.compat.as_bytes(str(model_version)))
        print ('Exporting trained model to', output_path)
        builder = tf.saved_model.builder.SavedModelBuilder(output_path)

        # Build the signature_def_map.
        classify_inputs_tensor_info = tf.saved_model.utils.build_tensor_info(serialized_tf_example)
        #classes_output_tensor_info = tf.saved_model.utils.build_tensor_info(classes)
        scores_output_tensor_info = tf.saved_model.utils.build_tensor_info(probabilities)

        classification_signature = (
              tf.saved_model.signature_def_utils.build_signature_def(
                  inputs={
                      tf.saved_model.signature_constants.CLASSIFY_INPUTS:
                          classify_inputs_tensor_info
                  },
                  outputs={
                      tf.saved_model.signature_constants.CLASSIFY_OUTPUT_SCORES:
                          scores_output_tensor_info
                  },
                  method_name=tf.saved_model.signature_constants.
                  CLASSIFY_METHOD_NAME))

        predict_inputs_tensor_info = tf.saved_model.utils.build_tensor_info(jpegs)
        prediction_signature = (
                tf.saved_model.signature_def_utils.build_signature_def(
                        inputs={'images': predict_inputs_tensor_info},
                        outputs={'scores': scores_output_tensor_info},
                        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
                ))

        legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
        builder.add_meta_graph_and_variables(
                sess, [tf.saved_model.tag_constants.SERVING],
                signature_def_map={
                        'predict_images': prediction_signature,
                        tf.saved_model.signature_constants.
                        DEFAULT_SERVING_SIGNATURE_DEF_KEY:classification_signature,},
                legacy_init_op=legacy_init_op)

        builder.save()
        print ('Successfully exported model to %s' % output_dir)

Then you can send requests to your server using code like this:

import requests
import base64
import json

image = r"./test2.jpg"
URL = "http://{HOST:port}/v1/models/<modelname>/versions/1:classify" 
headers = {"content-type": "application/json"}
image_content = base64.b64encode(open(image,'rb').read()).decode("utf-8")
body = {
    "signature_name": "serving_default",
    "examples": [
                {"image/encoded": {"b64":image_content}}
                ]
    }
r = requests.post(URL, data=json.dumps(body), headers = headers) 
print(r.text)

or with curl:

curl -X POST \
  http://{HOST:port}/v1/models/<modelname>/versions/<version>:classify \
  -d '{
    "signature_name": "<signature_name>",
    "examples": [
      {
        "image": { "b64": "<base64-encoded image, or $(base64 <image_path>)>" }
      }
    ]
  }'

Hopefully this helps.

enricorotundo commented 6 years ago

The 'Encoding binary values' example shown in the docs seems wrong. Version 1.9 requires the "image/encoded" key instead of the "image" key shown in the example below:

{
  "signature_name": "classify_objects",
  "examples": [
    {
      "image": { "b64": "aW1hZ2UgYnl0ZXM=" },
      "caption": "seaside"
    },
    {
      "image": { "b64": "YXdlc29tZSBpbWFnZSBieXRlcw==" },
      "caption": "mountains"
    }
  ]
}

netfs commented 6 years ago

the "key" part in the request ("image" or "image/encoded" or anything else) is dependent on the model being served. the API does not impose any specific name. it depends on the name that the model author/owner has chosen for a given input.

the example in docs is only for illustrative purposes. the important aspect is tagging binary blobs with "b64" key and choosing a base64 encoding.
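For instance (purely illustrative; the input names are whatever the model exports), a model whose input is named image/encoded would be called with that key:

{
  "signature_name": "serving_default",
  "examples": [
    {"image/encoded": {"b64": "aW1hZ2UgYnl0ZXM="}}
  ]
}

while the docs' example model happens to name its input image.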

xujie32 commented 6 years ago

Thank you guys,

I'll try it later and let you know.

Thanks again.

phanchutoan commented 5 years ago

(Quoting @walton-wang929's full export and request example above.)

What should I put in `probabilities`?

xujie32 commented 5 years ago

@phanchutoan Do you mean the probabilities in 'scores_output_tensor_info = tf.saved_model.utils.build_tensor_info(probabilities)'? It means how likely the input image is to belong to a certain category. Note that some information was omitted in the above example; I suggest you follow this file: inception_save_model.py.
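For reference, probabilities is usually just the softmax over the classifier's logits; a minimal sketch (the logits name is illustrative):

# Assuming `logits` is the classifier's final unnormalized output,
# the exported per-class scores are its softmax:
probabilities = tf.nn.softmax(logits, name='probabilities')
scores_output_tensor_info = tf.saved_model.utils.build_tensor_info(probabilities)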

vijayalakshmi-ml commented 5 years ago

@walton-wang929 @xujie32

I used the code below to export a Keras model for TensorFlow Serving:

 model = load_model('modelName.h5')

 image = tf.placeholder(shape=[None], dtype=tf.string)
 export_path = 'path-name'

 builder = saved_model_builder.SavedModelBuilder(export_path)

 signature = predict_signature_def(inputs={'image_bytes': image},
                                   outputs={'scores': model.output})

 with K.get_session() as sess:
     builder.add_meta_graph_and_variables(sess=sess,
                                          tags=[tag_constants.SERVING],
                                          signature_def_map={
                                              'predict': signature})
     builder.save()

I passed the image data and converted it to a JSON file so that it can be sent to Cloud ML Engine for prediction:

 from base64 import b64encode
 import json

 with open(IMAGE_NAME, 'rb') as open_file:
     byte_content = open_file.read()
 # decode to str so json.dumps can serialize it
 base64_bytes = b64encode(byte_content).decode('utf-8')
 raw_data = {IMAGE_NAME: base64_bytes}
 request_body = json.dumps({'images': base64_bytes})
 with open('test_data.json', 'w') as outfile:
     outfile.write(request_body)

Then I pass this JSON (the base64 encoding of the image) to Cloud ML Engine:

   !gcloud ml-engine predict --model model-name --version version_3 --json-instances test_data.json

I got the below error:

   ['{', '  "error": "Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details=\\"Tensor Placeholder_107:0, specified in either feed_devices or fetch_devices was not found in the Graph\\")"', '}']

Please help

kspook commented 5 years ago

I made a curl script for text detection with https://github.com/eragonruan/text-detection-ctpn.git. However, I got a malformed-request error with the curl script. Can anyone tell me the reason?

I checked the information here, but I couldn't solve it.
The curl script I ran and its output are below:

================= input =================

curl -X POST http://localhost:9001/v1/models/ctpn -d '{ "signature_name": "ctpn_recs_predict", "inputs": { "image": { "b64": "$(base64 /home/kspook/text-detection-ctpn/data/demo/006.jpg)" } } }'

========== output (error) ==========

{ "error": "Malformed request: POST /v1/models/ctpn" }

elvys-zhang commented 5 years ago

@kspook Your URL should be like this:


 http://localhost:9001/v1/models/ctpn:classify

kspook commented 5 years ago

@elvys-zhang, thank you for your comment.

My model doesn't include a classify method.

kspook commented 5 years ago

@elvys-zhang, thank you for your comment.
According to the export script, the method is 'predict', so I changed it to:

curl -X POST http://localhost:9001/v1/models/ctpn:predict -d '{ "signature_name": "ctpn_recs_predict", "inputs": { "image": { "b64": "${base64 /home/kspook/text-detection/data/demo/006.jpg}" } } }'

But this time I got a different error: {"error": "JSON Value: {\n \"b64\":${base64 /home/kspook/text-detection/data/demo/006.jpg}\"\n} Type: Object is not of expected type: float"}

====== export script =====

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['ctpn_recs_predict']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['image'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, -1, -1, 3)
        name: Placeholder:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['output_box_pred'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, -1, -1, 40)
        name: rpn_bbox_pred/Reshape_1:0
    outputs['output_cls_prob'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, -1, -1, 20)
        name: Reshape_2:0
  Method name is: tensorflow/serving/predict

elvys-zhang commented 5 years ago

What I mean is that you should specify the method after the model name, like:

http://host:port/v1/models/MODEL_NAME:METHOD

Your model was exported with the predict method, so your URL should be:


http://localhost:9001/v1/models/ctpn:predict

kspook commented 5 years ago

@elvys-zhang, thank you for the quick response.

Now I have another problem:

{"error": "JSON Value: {\n "b64":${base64 /home/kspook/text-detection/data/demo/006.jpg}"\n} Type : Object is one of expected type : float"}

can you help me?

====== export script ======

from __future__ import print_function
import tensorflow as tf
import numpy as np
import os, sys, cv2
from tensorflow.python.platform import gfile
import glob
import shutil

dir_path = os.path.dirname(os.path.realpath(__file__))
sys.path.append(os.path.join(dir_path, '..'))

from lib.networks.factory import get_network
from lib.fast_rcnn.config import cfg, cfg_from_file
from lib.fast_rcnn.test import test_ctpn
from lib.utils.timer import Timer
from lib.text_connector.detectors import TextDetector
from lib.text_connector.text_connect_cfg import Config as TextLineCfg

def export():
    cfg_from_file(os.path.join(dir_path, 'text.yml'))
    config = tf.ConfigProto(allow_soft_placement=True)
    sess = tf.Session(config=config)

    with gfile.FastGFile('../data/ctpn.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        sess.graph.as_default()
        tf.import_graph_def(graph_def, name='')
    sess.run(tf.global_variables_initializer())

    net = get_network("VGGnet_test")
    print(('Loading network {:s}... '.format("VGGnet_test")), end=' ')
    saver = tf.train.Saver()
    print(saver)
    try:
        ckpt_path = os.path.abspath(os.path.join(dir_path, cfg.TEST.checkpoints_path))
        ckpt = tf.train.get_checkpoint_state(ckpt_path)
        print('Restoring from {}...'.format(ckpt.model_checkpoint_path), end=' ')
        saver.restore(sess, ckpt.model_checkpoint_path)
        print('done')
    except:
        raise Exception('Check your pretrained {:s}'.format(ckpt.model_checkpoint_path))
    print(' done.')

    input_img = sess.graph.get_tensor_by_name('Placeholder:0')
    output_cls_prob = sess.graph.get_tensor_by_name('Reshape_2:0')
    output_box_pred = sess.graph.get_tensor_by_name('rpn_bbox_pred/Reshape_1:0')

    builder = tf.saved_model.builder.SavedModelBuilder('./export/1')

    imageplaceholder_info = tf.saved_model.utils.build_tensor_info(input_img)
    cls_prob_info = tf.saved_model.utils.build_tensor_info(output_cls_prob)
    box_pred_info = tf.saved_model.utils.build_tensor_info(output_box_pred)
    prediction_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            inputs={'image': imageplaceholder_info},
            outputs={'output_cls_prob': cls_prob_info,
                     'output_box_pred': box_pred_info},
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
    init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={'ctpn_recs_predict': prediction_signature},
        legacy_init_op=init_op)
    builder.save()

if __name__ == '__main__':
    export()

elvys-zhang commented 5 years ago

@kspook b64 passes encoded bytes to the server, but the input type of your model is float. You could try to change the input of your model to string, or you could send your image as floats:

image = cv2.imread("xxx.jpg", cv2.IMREAD_COLOR)
image = image.astype(np.float32) / 255
image = image.tolist()
headers = {"content-type": "application/json"}
body = {
        "signature_name": "ctpn_recs_predict",
        "inputs": [
           image 
           ]
        }
r = requests.post(url, data = json.dumps(body), headers = headers)

I have tried this way and it worked fine on my resnet model. Hope this helps.
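For the string option, a rough untested sketch (tensor names here are illustrative): the serving graph would need a string placeholder that decodes the JPEG bytes and converts them to the float tensor your CTPN graph expects, e.g.

# Hypothetical sketch, not tested: accept JPEG bytes and convert to floats
image_bytes = tf.placeholder(tf.string, shape=[], name='image_bytes')
decoded = tf.image.decode_jpeg(image_bytes, channels=3)
float_image = tf.image.convert_image_dtype(decoded, tf.float32)  # scales to [0, 1)
batched = tf.expand_dims(float_image, 0)  # add the batch dimension
# then wire `batched` into the frozen graph, e.g.:
# tf.import_graph_def(graph_def, input_map={'Placeholder:0': batched}, ...)

With that, the client can keep sending {"b64": "..."} values as in the classify examples above.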

kspook commented 5 years ago

@elvys-zhang
Thanks for your kind response, but I still get the same 'Object is not of expected type: float' error.

the output is as follows: .... [\n 0.054901961237192157,\n 0.054901961237192157,\n 0.054901961237192157\n ],\n [\n 0.05098039284348488,\n 0.05098039284348488,\n 0.05098039284348488\n ],\n [\n 0.04313725605607033,\n 0.05098039284348488,\n 0.05098039284348488\n ],\n [\n 0.03921568766236305,\n 0.0470588244497776,\n 0.0470588244497776\n ],\n [\n 0.03921568766236305,\n 0.0470588244497776,\n 0.0470588244497776\n ],\n [\n 0.03921568766236305,\n 0.0470588244497776,\n 0.0470588244497776\n ]\n ]\n ]\n }\n} Type: Object is not of expected type: float" }

kspook commented 5 years ago

@elvys-zhang, I think my exporting might be wrong. So I found a different style of exporting script whose owner claimed success, but he didn't share the whole script.

Can you write the unshared part, "preprocess"? https://github.com/eragonruan/text-detection-ctpn/issues/288#issuecomment-463930584

@hcnhatnam, I have successfully deployed CTPN in TensorFlow Serving, but I can't share all the code, sorry. My solution was to implement the proposal_layer and textdetector.detect (demo_pb.py) functions with the TF API (not Python/NumPy); the original implementation is NumPy, which can't be merged into the TensorFlow Serving model. The outputs of the CTPN model are output_tensor_cls_prob and output_tensor_box_pred; when I get these two intermediate results, I combine all the 16-pixel boxes into the final detection boxes (the postprocess function). The proposal_layer_tf_api function is the TF API implementation, written with reference to the proposal_layer and textdetector.detect functions.

def postprocess(cls_prob_tf, box_pred_tf, im_info_tf):
    boxes = proposal_layer_tf_api(cls_prob_tf, box_pred_tf, im_info_tf)
    return boxes

with tf.Session() as sess:
    with tf.gfile.GFile(args.model, "rb") as f:
        restored_graph_def = tf.GraphDef()
        restored_graph_def.ParseFromString(f.read())
    tf.import_graph_def(
        restored_graph_def,
        input_map=None,
        return_elements=None,
        name=""
    )

    # print node info
    tensor_name_list = [tensor.name for tensor in tf.get_default_graph().as_graph_def().node]
    for tensor_name in tensor_name_list:
        print(tensor_name, '\n')

    export_path_base = args.export_model_dir
    export_path = os.path.join(tf.compat.as_bytes(export_path_base),
                               tf.compat.as_bytes(str(args.model_version)))
    print('Exporting trained model to', export_path)
    builder = tf.saved_model.builder.SavedModelBuilder(export_path)

    raw_image = tf.placeholder(tf.float32, shape=[None, None, None, 3])  # input raw image
    jpeg, im_info = preprocess_image(raw_image)  # preprocess: resize

    output_tensor_cls_prob, output_tensor_box_pred = tf.import_graph_def(
        tf.get_default_graph().as_graph_def(),
        input_map={'Placeholder:0': jpeg},
        return_elements=['Reshape_2:0', 'rpn_bbox_pred/Reshape_1:0'])

    tensor_info_input = tf.saved_model.utils.build_tensor_info(raw_image)
    tensor_info_output_cls_prob = tf.saved_model.utils.build_tensor_info(output_tensor_cls_prob)
    tensor_info_output_box_pred = tf.saved_model.utils.build_tensor_info(output_tensor_box_pred)

    prediction_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            inputs={'images': tensor_info_input},
            outputs={'cls_prob': tensor_info_output_cls_prob,
                     'box_pred': tensor_info_output_box_pred},
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))

    im_info_output = tf.saved_model.utils.build_tensor_info(im_info)

    print('\n')
    print('im_info_input tensor shape', im_info.shape)

    # CTPN post-processing: merge the 16-pixel-wide boxes into the final text boxes
    result_boxes = postprocess(output_tensor_cls_prob, output_tensor_box_pred, im_info)

    # Crop the image by the detected text boxes, scale proportionally to height 32,
    # and pad to a uniform size
    crop_resize_img, crop_resize_im_info = crop_resize_image(jpeg, result_boxes)
    output_crop_resize_img = tf.saved_model.utils.build_tensor_info(crop_resize_img)
    output_crop_resize_img_info = tf.saved_model.utils.build_tensor_info(crop_resize_im_info)

    tensor_info_output_boxes = tf.saved_model.utils.build_tensor_info(result_boxes)

    prediction_post_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            inputs={'images': tensor_info_input},
            outputs={'detection_boxes': tensor_info_output_boxes,
                     'resize_im_info': im_info_output,
                     'crop_resize_img': output_crop_resize_img,
                     'crop_resize_im_info': output_crop_resize_img_info},
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))

    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            'predict_images': prediction_signature,
            'predict_images_post': prediction_post_signature,
        })

    builder.save(as_text=False)
    print('Done exporting!')

elvys-zhang commented 5 years ago

@kspook You could refer to this official example to do the dtype conversion.

kspook commented 5 years ago

@elvys-zhang, you are right. I made a mistake. Now I can run the TensorFlow Serving model well. Thank you.

kspook commented 5 years ago

@elvys-zhang, the model I really want to run uses evanfly's exporting script (eragonruan/text-detection-ctpn#288 (comment)), but he didn't share the whole script. So I made an exporting script by referring to an alternative approach by @hiepph (https://github.com/eragonruan/text-detection-ctpn/issues/134#issuecomment-403859585), but there are a few problems: I only got information for one text box, and I got an "unknown error" message.

########################################
# No. 1: exporting script
########################################

from __future__ import print_function
import tensorflow as tf
import numpy as np
import os, sys, cv2
from tensorflow.python.platform import gfile
import glob
import shutil

dir_path = os.path.dirname(os.path.realpath(__file__))
sys.path.append(os.path.join(dir_path, '..'))

from lib.networks.factory import get_network
from lib.fast_rcnn.config import cfg, cfg_from_file
from lib.fast_rcnn.test import test_ctpn
from lib.utils.timer import Timer
from lib.text_connector.detectors import TextDetector
from lib.text_connector.text_connect_cfg import Config as TextLineCfg
from lib.fast_rcnn.test import _get_blobs
from lib.rpn_msr.proposal_layer_tf import proposal_layer

dir_path = os.path.dirname(os.path.realpath(__file__))

def resize_im(im, scale, max_scale=None):
    f = float(scale) / min(im.shape[0], im.shape[1])
    if max_scale != None and f * max(im.shape[0], im.shape[1]) > max_scale:
        f = float(max_scale) / max(im.shape[0], im.shape[1])
    return cv2.resize(im, None, None, fx=f, fy=f, interpolation=cv2.INTER_LINEAR), f

def preprocess_image(image_buffer):
    """Preprocess JPEG encoded bytes to 3D float Tensor."""
    # Decode the string as an RGB JPEG.
    # Note that the resulting image contains an unknown height and width
    # that is set dynamically by decode_jpeg. In other words, the height
    # and width of image is unknown at compile-time.
    image = tf.image.decode_image(image_buffer, channels=3)
    image.set_shape([256, 256, 256, 3])

    # self.img_pl = tf.placeholder(tf.string, name='input_image_as_bytes')
    # After this point, all image pixels reside in [0,1)
    # until the very end, when they're rescaled to (-1, 1).  The various
    # adjust_* ops all require this range for dtype float.
    image = tf.image.convert_image_dtype(image, dtype=tf.float32)
    # Crop the central region of the image with an area containing 87.5% of
    # the original image.
    image = tf.image.central_crop(image, central_fraction=0.875)

    image = tf.expand_dims(image, 0)
    image = tf.squeeze(image, [0])
    # Finally, rescale to [-1,1] instead of [0, 1)
    image = tf.subtract(image, 0.5)
    image = tf.multiply(image, 2.0)
    return image

def query_ctpn(sess, cv2img):
    """Args:
        sess: tensorflow session
        cfg: CTPN config
        img: numpy array image

    Returns:
        A list of detected bounding boxes, each bounding box having the
        coordinates [(xmin, ymin), (xmax, ymax)]:
        (xmin, ymin) -------------
        |                        |
        ---------------- (xmax, ymax)
    """
    # Specify input/output
    input_img = sess.graph.get_tensor_by_name('Placeholder:0')
    output_cls_box = sess.graph.get_tensor_by_name('Reshape_2:0')
    output_box_pred = sess.graph.get_tensor_by_name('rpn_bbox_pred/Reshape_1:0')
    #print('query_pb : img, ',  img)

    img, scale = resize_im(cv2img, scale=TextLineCfg.SCALE, max_scale=TextLineCfg.MAX_SCALE)
    blobs, im_scales = _get_blobs(img, None)
    if cfg.TEST.HAS_RPN:
        im_blob = blobs['data']
        blobs['im_info'] = np.array([[im_blob.shape[1], im_blob.shape[2], im_scales[0]]],
                                    dtype=np.float32)
        cls_prob, box_pred = sess.run([output_cls_box, output_box_pred],
                                      feed_dict={input_img: blobs['data']})
        print('box_pred, ', box_pred)
        rois, _ = proposal_layer(cls_prob, box_pred, blobs['im_info'],
                                 'TEST', anchor_scales=cfg.ANCHOR_SCALES)
        scores = rois[:, 0]
        boxes = rois[:, 1:5] / im_scales[0]
        textdetector = TextDetector()
        boxes = textdetector.detect(boxes, scores[:, np.newaxis], img.shape[:2])

        # Convert boxes to bounding rectangles
        rects = []
        for box in boxes:
            min_x = min(int(box[0]/scale), int(box[2]/scale), int(box[4]/scale), int(box[6]/scale))
            min_y = min(int(box[1]/scale), int(box[3]/scale), int(box[5]/scale), int(box[7]/scale))
            max_x = max(int(box[0]/scale), int(box[2]/scale), int(box[4]/scale), int(box[6]/scale))
            max_y = max(int(box[1]/scale), int(box[3]/scale), int(box[5]/scale), int(box[7]/scale))

        # NOTE: this append sits outside the for loop above, so only one
        # rectangle is ever collected
        rects.append([(min_x, min_y), (max_x, max_y)])
        print('rects.append, ', rects)
        return rects

def export():
    ''' Sess no. 1 of 2: ctpn_sess '''
    cfg_from_file(os.path.join(dir_path, 'text_post.yml'))
    config = tf.ConfigProto(allow_soft_placement=True)
    ctpn_sess = tf.Session(config=config)
    with ctpn_sess.as_default():
        with tf.gfile.FastGFile('../data/ctpn.pb', 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            ctpn_sess.graph.as_default()
            tf.import_graph_def(graph_def, name='')
        ctpn_sess.run(tf.global_variables_initializer())
    cv2img = cv2.imread("../data/demo/006.jpg", cv2.IMREAD_COLOR)
    result_boxes = query_ctpn(ctpn_sess, cv2img)
    print('Creating boxes done')

    ''' Sess no. 2 of 2: sess '''
    with tf.Session() as sess:
        with gfile.FastGFile('../data/ctpn.pb', 'rb') as f:
            restored_graph_def = tf.GraphDef()
            restored_graph_def.ParseFromString(f.read())
        tf.import_graph_def(
            restored_graph_def,
            input_map=None,
            return_elements=None,
            name=""
        )

        '''
        export_path_base = args.export_model_dir
        export_path = os.path.join(tf.compat.as_bytes(export_path_base),
                                   tf.compat.as_bytes(str(args.model_version)))
        '''
        builder = tf.saved_model.builder.SavedModelBuilder('../exportPo/1')

        # print('Exporting trained model to', export_path)
        print('Exporting trained model ')

        raw_image = tf.placeholder(tf.string, name='tf_box')
        feature_configs = {
            'image/encoded': tf.FixedLenFeature(shape=[], dtype=tf.string),
        }
        tf_example = tf.parse_example(raw_image, feature_configs)

        jpegs = tf_example['image/encoded']
        image_string = tf.reshape(jpegs, shape=[])
        jpeg = preprocess_image(image_string)
        print('jpeg, jpeg.shape[]', jpeg, jpeg.shape)

        output_tensor_cls_prob, output_tensor_box_pred = tf.import_graph_def(
            tf.get_default_graph().as_graph_def(),
            input_map={'Placeholder:0': jpeg},
            return_elements=['Reshape_2:0', 'rpn_bbox_pred/Reshape_1:0'])

        tensor_info_input = tf.saved_model.utils.build_tensor_info(raw_image)
        tensor_info_output_cls_prob = tf.saved_model.utils.build_tensor_info(output_tensor_cls_prob)
        tensor_info_output_box_pred = tf.saved_model.utils.build_tensor_info(output_tensor_box_pred)

        '''
        crop_resize_img, crop_resize_im_info = resize_im(cv2img, result_boxes)
        crop_resize_img, crop_resize_im_info = crop_resize_image(imageplaceholder_info, result_boxes)
        output_crop_resize_img = tf.saved_model.utils.build_tensor_info(crop_resize_img)
        output_crop_resize_img_info = tf.saved_model.utils.build_tensor_info(crop_resize_im_info)
        '''
        result_boxes = np.array(result_boxes, dtype=np.float32)
        result_boxes = tf.convert_to_tensor(result_boxes)
        tensor_info_output_boxes = tf.saved_model.utils.build_tensor_info(result_boxes)

        prediction_post_signature = (
            tf.saved_model.signature_def_utils.build_signature_def(
                inputs={'images': tensor_info_input},
                outputs={'detection_boxes': tensor_info_output_boxes},
                # outputs={'detection_boxes': tensor_info_output_boxes,
                #          'resize_im_info': im_info_output,
                #          'crop_resize_img': output_crop_resize_img,
                #          'crop_resize_im_info': output_crop_resize_img_info},
                method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
            ))

        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING],
            signature_def_map={
                # 'predict_images': prediction_signature,
                'predict_images_post': prediction_post_signature
            })
        builder.save(as_text=False)

if __name__ == '__main__':
    export()

########################################
# No. 2: test script
########################################

import cv2
import numpy as np
import os
import base64
import json
import requests
import tensorflow as tf

image = r"/home/kspook/text-detection-ctpn/data/demo/006.jpg"
URL = "http://localhost:9001/v1/models/ctpn:predict"
headers = {"content-type": "application/json"}
image_content = base64.b64encode(open(image, 'rb').read()).decode("utf-8")
body = {
    "signature_name": "predict_images_post",
    "inputs": [
        image_content
    ]
}
r = requests.post(URL, data=json.dumps(body), headers=headers)
print(r.text)

elvys-zhang commented 5 years ago

@kspook
It seems like you could just add whatever additional outputs you want here:

tf.saved_model.signature_def_utils.build_signature_def(
    inputs={'images': tensor_info_input},
    outputs={'detection_boxes': tensor_info_output_boxes,
             'output2': tensor_info_of_output2,
             'output3': tensor_info_of_output3},
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
)

Or would you mind uploading your .py file? The comment above doesn't seem to be formatted well. Thanks.

misterpeddy commented 5 years ago

Please feel free to re-open if this requires anything from TF serving.

ciel-zhang commented 4 years ago

How do I export a SavedModel in TensorFlow 2 with a signature_def_map?

ciel-zhang commented 4 years ago

@misterpeddy Can I reopen this issue?

misterpeddy commented 4 years ago

What exactly are you trying to specify? Does the documentation (provide the signatures argument to saved_model.save()) not help?
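For example, a rough TF2 sketch (the model and shapes are illustrative): the signatures argument of tf.saved_model.save plays the role of the old signature_def_map:

import tensorflow as tf

class MyModel(tf.Module):
    # input_signature pins down the serving input's shape and dtype
    @tf.function(input_signature=[tf.TensorSpec([None, 3], tf.float32)])
    def score(self, x):
        return {'scores': tf.nn.softmax(x)}

model = MyModel()
tf.saved_model.save(
    model, '/tmp/export/1',
    signatures={'serving_default': model.score,
                'predict_images': model.score})

saved_model_cli show --dir /tmp/export/1 --all should then list both signatures.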