@kspook How do you test the saved model:)
tensorflow_model_server --port=9000 --rest_api_port=9001 --model_name=crnn --model_base_path=/home/kspook/CRNN_Tensorflow/model/crnn_syn90k_saved_model
curl -X POST http://localhost:9001/v1/models/crnn:predict -H 'cache-control: no-cache' -H 'content-type: application/json' -d '{ "inputs": ...
@kspook I think you're supposed to modify the code in tools/export_saved_model.py to fix this:)
I found it with saved_model_cli. ^^ The signature name is 'outputs'.
Thanks, anyway ^^
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['outputs']:
The given SavedModel SignatureDef contains the following input(s):
inputs['input_tensor'] tensor_info:
dtype: DT_FLOAT
shape: (1, 32, 100, 3)
name: input_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['prediction'] tensor_info:
dtype: DT_INT64
shape: (-1, -1)
name:
Method name is: tensorflow/serving/predict
@MaybeShewill-CV , so you never tried to run tensorflow serving for this model?
I want to try, but the placeholder must be string type, while the model's input_tensor is float32. I can't convert it.
I also asked this issue at tensorflow serving. ( https://github.com/tensorflow/serving/issues/994#issuecomment-492546938 )
In case of flask, input_tensor (float32) works well. Do you know about this?
@kspook The saved model was not used in tensorflow serving in my local environment==!
I am trying to set up tensorflow serving, but I get an error because the 'saved_model' doesn't have a name for its output. https://github.com/tensorflow/serving/issues/1100
But I can't change the code. T.T Can you help me?
Error message:
python tools/test_crnn.py { "error": "Tensor :0, specified in either feed_devices or fetch_devices was not found in the Graph" }
saved_model_cli show --dir model/crnn_syn90k_saved_model/1/ --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['outputs']:
The given SavedModel SignatureDef contains the following input(s):
inputs['input_tensor'] tensor_info:
dtype: DT_FLOAT
shape: (1, 32, 100, 3)
name: input_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['prediction'] tensor_info:
dtype: DT_INT64
shape: (-1, -1)
name:
Method name is: tensorflow/serving/predict
3. Test script:
import cv2
import numpy as np
import os
import base64
import json
import requests
import tensorflow as tf
#image = r"/home/kspook/CRNN_Tensorflow/data/test_images/test_01.jpg"
image = cv2.imread("/home/kspook/CRNN_Tensorflow/data/test_images/test_01.jpg", cv2.IMREAD_COLOR)
image = image.astype(np.float32) / 255
#image = np.array(image, np.float32) / 127.5 - 1.0
#image = np.expand_dims(image, 0)
image = image.tolist()
URL="http://localhost:9001/v1/models/crnn:predict"
headers = {"content-type": "application/json"}
body={
"signature_name": "outputs",
"inputs": [
image
]
}
r= requests.post(URL, data=json.dumps(body), headers = headers)
print(r.text)
@kspook I have used a tensorflow saved model for tensorflow serving here https://github.com/MaybeShewill-CV/nsfw-classification-tensorflow; you may check if you can find something useful. I will look at this problem too once I get spare time:)
Thank you,
This nsfw-classification-tensorflow model has a name for output['prediction'] tensor_info
In decodes, _ = tf.nn.ctc_beam_search_decoder() in CRNN, 'decodes' is a list of length top_paths, where decodes[j] is a SparseTensor containing the decoded outputs. So, once I can add a name to decodes[0], I can solve this.
Some posts suggested building the TensorProto for the SparseTensor components separately. However, I can't confirm that this is the correct answer.
1) https://stackoverflow.com/questions/52373193/tensor-not-found-with-empty-name-when-serving-the-model.
2) https://github.com/tensorflow/tensorflow/issues/22396#issuecomment-424206752
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['classify_result']:
The given SavedModel SignatureDef contains the following input(s):
inputs['input_tensor'] tensor_info:
dtype: DT_FLOAT
shape: (1, 224, 224, 3)
name: input_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['prediction'] tensor_info:
dtype: DT_FLOAT
shape: (1, 5)
name: nsfw_cls_model/final_prediction:0
Method name is: tensorflow/serving/classify
2. separate proto
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['outputs']:
The given SavedModel SignatureDef contains the following input(s):
inputs['input_tensor'] tensor_info:
dtype: DT_FLOAT
shape: (1, 32, 100, 3)
name: input_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['dense_shape'] tensor_info:
dtype: DT_INT64
shape: (2)
name: CTCBeamSearchDecoder:2
outputs['indices'] tensor_info:
dtype: DT_INT64
shape: (-1, 2)
name: CTCBeamSearchDecoder:0
outputs['values'] tensor_info:
dtype: DT_INT64
shape: (-1)
name: CTCBeamSearchDecoder:1
Method name is: tensorflow/serving/predict
I was able to get this model to partially work with tf-serving, by decomposing the decodes[0] SparseTensor into indices, values, and dense_shape. This snippet explicitly names the output of the individual parts of the decodes[0] tensor (see https://github.com/tensorflow/tensorflow/issues/22396). I then modify data_provider.tf_io_pipeline_fast_tools:sparse_tensor_to_str to accept these inputs separately.
# from export_saved_model.py#85
indices_output_tensor_info = tf.saved_model.utils.build_tensor_info(decodes[0].indices)
values_output_tensor_info = tf.saved_model.utils.build_tensor_info(decodes[0].values)
dense_shape_output_tensor_info = tf.saved_model.utils.build_tensor_info(decodes[0].dense_shape)

[...]

# from export_saved_model.py#111
# build SignatureDef protobuf
signatur_def = sm.signature_def_utils.build_signature_def(
    inputs={'input_tensor': saved_input_tensor},
    outputs={
        'decodes_indices': indices_output_tensor_info,
        'decodes_values': values_output_tensor_info,
        'decodes_dense_shape': dense_shape_output_tensor_info,
    },
    method_name=sm.signature_constants.PREDICT_METHOD_NAME,
)

# add graph into MetaGraphDef protobuf
saved_builder.add_meta_graph_and_variables(
    sess,
    tags=[sm.tag_constants.SERVING],
    # line below adds 'serving_default' to signature def
    signature_def_map={sm.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signatur_def},
)
This then works on some of my example images, but seems to segfault on others without a stack trace. I think the problem has to do with the dynamic sizing of the bidirectional dynamic RNN (could there be a fixed batch size somewhere?). Happy to turn this into its own ticket for discussion. Cheers
Btw, since someone might run into the same segfault issue, I solved it. Turns out using tf-serving out of the box with this model will cause segfaults on images with a width over 100. Changing the 100 to None in line 32 of config/global_config.py seems to stop the segfaulting:
__C.ARCH.INPUT_SIZE = (None, 32) # synth90k dataset
@eldon, how did you change data_provider.tf_io_pipeline_fast_tools:sparse_tensor_to_str?
@kspook I ended up adding my own function to tf_io_pipeline_fast_tools.py and calling it from the client script on the outputs of the API call. All this does is expand the arguments to accept the tensor components directly. Cheers
def unpack_sparse_tensor_to_str(self, indices, values, dense_shape):
    values = np.array([self._ord_map[str(tmp) + '_index'] for tmp in values])
    number_lists = np.ones(dense_shape, dtype=values.dtype)
    str_lists = []
    res = []
    for i, index in enumerate(indices):
        number_lists[index[0], index[1]] = values[i]
    for number_list in number_lists:
        # Translate from ord() values into characters
        str_lists.append([self.int_to_char(val) for val in number_list])
    for str_list in str_lists:
        # int_to_char() returns '\x00' for an input == 1, which is the default
        # value in number_lists, so we skip it when building the result
        res.append(''.join(c for c in str_list if c != '\x00'))
    return res
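For reference, this is roughly how I call it from my client script on the REST response. Treat it as a sketch of my own setup rather than code from the repo: URL and the float image list are prepared exactly as in the client script earlier in this thread, codec is assumed to be the repo's char-dict/ord-map handler that now carries unpack_sparse_tensor_to_str(), and the output field names follow the signature I exported above.
import json
import numpy as np
import requests

# Build the request; my export registers the signature under the default
# 'serving_default' key, and 'image' is the nested float list read with cv2.
body = {
    "signature_name": "serving_default",
    "inputs": [image],
}
r = requests.post(URL, data=json.dumps(body), headers={"content-type": "application/json"})
outputs = r.json()["outputs"]

# Rebuild the prediction string from the three SparseTensor components.
prediction = codec.unpack_sparse_tensor_to_str(
    indices=np.array(outputs["decodes_indices"], dtype=np.int64),
    values=np.array(outputs["decodes_values"], dtype=np.int64),
    dense_shape=np.array(outputs["decodes_dense_shape"], dtype=np.int64),
)
print(prediction)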
@eldon, with your solution, 'test_load_saved_model' gives an error, so we would need two different __C.ARCH.INPUT_SIZE values:
interpolation=cv2.INTER_LINEAR TypeError: an integer is required (got type NoneType)
@eldon, do you know how to get a string placeholder working? I made it work with float, but I can't with string. @MaybeShewill-CV, why don't you update export_saved_model.py with @eldon's changes? ^^
def preprocess_image(image_buffer):
    """Preprocess JPEG encoded bytes to 3D float Tensor."""
    image_size = tuple(CFG.ARCH.INPUT_SIZE)
    image = tf.image.decode_image(image_buffer, channels=3)
    image = tf.reshape(image, [1, image_size[1], image_size[0], 3])
    image = tf.image.convert_image_dtype(image, dtype=tf.float32)
    return image

# build inference tensorflow graph
image_size = tuple(CFG.ARCH.INPUT_SIZE)
'''
image_tensor = tf.placeholder(
    dtype=tf.float32,
    shape=[1, image_size[1], image_size[0], 3],
    name='input_tensor')
'''
raw_image = tf.placeholder(tf.string, name='tf_box')
feature_configs = {
    'image/encoded': tf.FixedLenFeature(shape=[], dtype=tf.string),
}
tf_example = tf.parse_example(raw_image, feature_configs)
jpegs = tf_example['image/encoded']
image_string = tf.reshape(jpegs, shape=[])
image_tensor = preprocess_image(image_string)
import cv2
import numpy as np
import os
import base64
import json
import requests
import tensorflow as tf
image = r"/home/kspook/CRNN_Tensorflow/data/test_images/test_01.jpg"
'''
image = cv2.imread("/home/kspook/CRNN_Tensorflow/data/test_images/test_01.jpg", cv2.IMREAD_COLOR)
image = image.astype(np.float32) / 255
image = image.tolist()
'''
URL="http://localhost:9001/v1/models/crnn:predict"
#URL = "http://{HOST:port}/v1/models/<modelname>/versions/1:classify"
headers = {"content-type": "application/json"}
image_content = base64.b64encode(open(image,'rb').read()).decode("utf-8")
body={
"signature_name": "serving_default",
#"signature_name": "outputs",
"inputs": [
image_content
]
}
r= requests.post(URL, data=json.dumps(body), headers = headers)
print(r.text)
{ "error": "JSON Value: \"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAwICQoJBwwKCQoNDAwOER0TERAQESMZGxUdKiUsKyklKCguNEI4LjE/MigoOk46P0RHSktKLTdRV1FIVkJJSkf/2wBDAQwNDREPESITEyJHMCgwR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0dHR0f/wAARCAAfAHQDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwDZs9FF1awSfa0jmud/kxFCQ23rlh0qKHRrqeyjuYnhbzd2yLfh2wcHAPXpV+0hN9p2nLBdrbm283zpA4Biycg4yDzVzSDCLDS3YO1yiztAgICucnIJ/lQBz0GmXtxbfaILd5I84yuCfy61AYJhCJjDIIj0cqdp/Gtki3XwvbG4FxuLyNGYsYDdBuzVzTpWRNJgZj5DQzNKh+6w56jvQBytFdHJcSifRLaTY6tEgYOit8rMB3HoBVeX7Pc6tqEL20KLDDKsQiXaMqSQxx34oAxaK3bzSbWBGdgyrFZ5dkbIM27b37Z/lTDo9r9iGJphdfY/tZ4Gzb6euaAMWjNak+i+Xbeat2hZFjM6MpXy9+Mc856/pUV5pE9r5I863macr5axvljnocHHHvQBnk0tTmwu9zKtvIzLIYiFG75gCSOOvANRtFJGDvjdcMVO4EYI6j60AMpaKKACiiigBKVWKkMpII5BFJRQBZh1C9gAWK7mRR0UOcflUiavfpafZVnxDtKbdi9D1GcZqlRQBen1WaWa0laKEPa42FVxkDBAPPTjtjqaZaagYdUN7LGJN5csgOM7gc9c+tU2pKANa51g3Gjm1YMJnmLuwHyspJbH/fRqee/tPskk0VwxmktltlgKH92BjPzdD0P51hUUAdFrOo211ZXJsniUvMivxhpEC8Hn0Oeg9KbsS48W20MEqvFD5YRlOQQig9voawKMGgDrradZPsd1DbNHFJJPdSqGL4KqVzn3J6VHpSRzWNnG8jCbzftsjOeuH29/auainmgz5M0kZIwdrEcenFPS7uI87ZW5jMRyc/Ie3PagDX1KWQWdpAksUjzR+Y8XlbnLSEtwSDjr65qa6j017i1hlgSKF5cxzxptVosY2sc53bsA56e1ZX9q3JlhkcQs8TKysYlB46DIA4qeLWnVwr20X2fyni8qMleHOWIPJzmgBuoR2Ec6xtBLbSquJYkYsFbJ7t14weOOaKq6hcm9u2m8vYCAoXduwAMDJ7/WigD/2Q==\" Type: String is not of expected type: float" }
@eldon You may pull a request if you're willing to:)
Ahh @kspook, I got an error at test_load_saved_model too, but I ignore it (since tf-serving loads the model differently and expects different inputs/outputs). It works fine in practice.
Another thing to consider: we noticed different results when leaving __C.ARCH.INPUT_SIZE = (100, 32) and resizing all input images to 100x32, versus allowing dynamic inputs. I'm not sure which of the results is "better" -- we haven't run benchmarks yet.
@kspook I haven't tried loading base64 encoded images directly. I think that would have to become part of the exported model for it to work in the request. For now, I use cv2 to read the image and send the array's .tolist() output to tf-serving. I'd like to get string input to work, but it'll likely be further in the future...
@MaybeShewill-CV sure! I'm a bit busy this and next week but would be glad to. I'll see what I can come up with :)
@eldon, thank you for your response. In the base64 case, __C.ARCH.INPUT_SIZE = (None, 32) produces a compile error, so it should not be 'None' (?).
In addition, can you output characters in decodes_values? Currently, it shows just numbers.
"decodes_values": [
14,
33,
20,
8,
26,
31,
4,
34,
33,
31,
19,
26,
29,
20
],
@eldon Thanks a lot:)
@kspook The model currently cannot support dynamic image width well, since the sequence length hyperparameter is tied to the input image width:)
@MaybeShewill-CV, can you make the exported model output characters instead of indices?
@kspook don't forget to pass the output of the call to codec.unpack_sparse_tensor_to_str as I define above. :)
@eldon, I have an error (however, it works well at test_load_saved_model)
Traceback (most recent call last):
File "tools/export_saved_model.py", line 232, in <module>
build_saved_model(args.ckpt_path, args.export_dir, args.char_dict_path, args.ord_map_dict_path)
File "tools/export_saved_model.py", line 158, in build_saved_model
prediction_val = codec.unpack_sparse_tensor_to_str(indices_output_tensor_info, values_output_tensor_info, dense_shape_output_tensor_info)
File "/home/kspook/CRNN_Tensorflow/data_provider/tf_io_pipline_fast_tools.py", line 242, in unpack_sparse_tensor_to_str
values = np.array([self._ord_map[str(tmp) + '_index'] for tmp in values])
TypeError: 'TensorInfo' object is not iterable
@kspook I have updated the code and the saved model file. You may test the python client in the docker file. You're welcome to give any useful suggestions here:) @eldon Thanks for your valuable pointer on the sparse tensor problem in tensorflow serving. Feel free to give any useful suggestions here:)
@MaybeShewill-CV, thank you very much.
I still have two issues: one major, one minor. The major one is running the curl script with string input. The minor one is getting character results from the 'SignatureDefs' output (i.e., getting characters the way test_load_saved_model does). ^^
@kspook 1. I have not tested deploying this model using the http/rest api. 2. The python client will give you the exact output words you want; you may try it first:)
@MaybeShewill-CV, thank you. The export produced a saved_model, yet I had an error with the test script.
python tools/test_crnn.py { "error": "Expected len(indices) == values.shape[0], but saw: 25 vs. 29\n\t [[{{node shadow_net/sequence_rnn_module/stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/fw/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3}}]]" }
@kspook Are you sure you have the same config as I pushed here?:)
@MaybeShewill-CV, I use python 3.7. I had a similar situation with @eldon's setup; once I downloaded @eldon's repository, it was solved.
Did you use the same checkpoint as in the README.md?
Evaluate the model on the synth90k dataset
In this repo you will find a model pre-trained on the Synth 90k dataset. When the tfrecords file of the synth90k dataset has been successfully generated, you may evaluate the model with the following script:
The pretrained crnn model weights on Synth90k dataset can be found here
python tools/evaluate_shadownet.py --dataset_dir PATH/TO/YOUR/DATASET_DIR
--weights_path PATH/TO/YOUR/MODEL_WEIGHTS_PATH
--char_dict_path PATH/TO/CHAR_DICT_PATH
--ord_map_dict_path PATH/TO/ORD_MAP_PATH
--process_all 1 --visualize 1
If I change 25 --> 29, I couldn't even export.
After successfully exporting with 25, I had the same error.
@kspook The sequence length cannot be changed==! You need to restore this config parameter:)
@MaybeShewill-CV
If I change __C.ARCH.INPUT_SIZE = (None, 32) # synth90k dataset, I can run the test script (but I need to omit test_load_saved_model).
Unfortunately, I get the same result for decodes_values.
In my opinion, decodes_values shows up as numbers like the output below because it didn't go through codec.sparse_tensor_to_str_for_tf_serving.
{
  "outputs": {
    "decode_dense_shape": [1, 14],
    "decode_indices": [[0, 0], [0, 1], [0, 2], [0, 3], [0, 4], [0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11], [0, 12], [0, 13]],
    "decode_values": [14, 33, 20, 8, 26, 31, 4, 34, 33, 31, 19, 26, 29, 20]
  }
}
@kspook The decode values are as you said. It's a list of int64 values:)
@MaybeShewill-CV, I want to get characters. Is there any way to get them?
@kspook The python client will give you characters prediction. Have you ever updated the repo and tested it==!
@MaybeShewill-CV Looks like you used the snippets I posted above and committed to your repo, so I no longer need to do a PR?
@MaybeShewill-CV actually since a lot of the new code comes from my repo, would you mind crediting authorship? Thanks,
@eldon You may open your pull request and I will remove the same code from my part:)
@eldon Or how do you want me to credit the authorship please let me know. I'd love to credit your authorship:)
My question is whether I can get the characters from the saved_model without the python client.
In addition, with __C.ARCH.INPUT_SIZE = (None, 32) I can run the python client, but it can't run test_load_saved_model. Thus, I can't check the characters in the result anyway.
@kspook Don't get your point:)
I mean I get a compile(?) error unless I change __C.ARCH.INPUT_SIZE = (100, 32) to __C.ARCH.INPUT_SIZE = (None, 32).
@kspook Compile error? Please give the error details, which script you use, and how you use it:)
I made a lot of comments above. ^^
@kspook You mean you met the error when you use the http/rest api to test the python client?
@MaybeShewill-CV, no. ^^ I had an error when I made the export file, including test_load_saved_model.
@kspook ==! Could you please specify your question again, including the script you use and how you use it? I really have no idea what you're talking about:)
@MaybeShewill-CV , sorry I was confused.
@kspook The error occurred when you used tools/export_saved_model.py?
@MaybeShewill-CV, no or yes. To be exact, I used 'export_crnn xxxx.sh'.
@kspook If you change CFG.ARCH.INPUT_SIZE to [None, 32], you should change the sequence length in the beam search function (line 80 in tools/export_saved_model.py) dynamically, which is not supported here for now. I think you may need to make your own saved model according to your unique demand:)
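If you want to experiment with it, a rough sketch would be to compute the sequence length from the input shape instead of hard-coding it. This assumes the CNN backbone downsamples the width by 4, so a 100-pixel-wide image gives the 25 time steps used now; I have not verified this here, and image_tensor / net_out below just stand in for the exporter's input placeholder and the CTC logits:)
import tensorflow as tf

# image_tensor: [batch, 32, width, 3]; net_out: CTC logits fed to the decoder.
batch_size = tf.shape(image_tensor)[0]
seq_len = tf.fill([batch_size], tf.floordiv(tf.shape(image_tensor)[2], 4))

decodes, _ = tf.nn.ctc_beam_search_decoder(
    inputs=net_out,
    sequence_length=seq_len,
    merge_repeated=False
)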
@MaybeShewill-CV ok, I'll make a PR today! I'll also include the docker runfiles I used.
For some reason, I'm having trouble with the new model weights too, but it works well with the old weights (the ones still in my repo). This could also be a bug on my end... Happy to move that to another ticket. Cheers
@MaybeShewill-CV thank you for your code.
I have an error: { "error": "Serving signature name: \"serving_default\" not found in signature def" } How can I solve it?
----- script -----------