Closed: AliGharbali closed this issue 5 years ago
@AliGharbali can you provide your model and your custom class file if you have one? I need to reproduce your result to see what's happening.
Edit: closed issue by mistake
@bendangnuksung thanks for the answer it would be great if you can reproduce the results. You can download them from here.
Hi @AliGharbali! I was able to convert your model to a TensorFlow model and a Serving model. Have you set your model save path correctly?
@bendangnuksung thanks for making this conversion so easy. I have the same observation as @AliGharbali: there are no files under the "variables" folder. I was able to serve it, though. I haven't yet compared the serving results against the 'correct' ones; I will update once I have the comparison. I am new to TensorFlow in general, so I am not sure what difference it makes whether the "variables" folder has files in it or not...
@cfengai The variables folder is indeed empty; however, you can still serve the model and run inference on it. The variables folder does not always need to have files in it. @AliGharbali Sorry! I misunderstood your question; I thought you meant that the model save path did not contain any 'saved_model'. I understand now, thanks to @cfengai.
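For anyone puzzled by the empty folder: when the Keras weights are frozen into a graph, they are stored as constants inside saved_model.pb itself, so variables/ is legitimately empty. A minimal sketch of a sanity check for such an export (the helper name is mine, not part of this repo):

```python
import os

def looks_like_frozen_saved_model(export_dir):
    """Heuristic check for a SavedModel exported from a frozen graph.

    In such an export every weight is baked into saved_model.pb as a
    constant, so variables/ exists but contains no checkpoint files.
    """
    pb_ok = os.path.isfile(os.path.join(export_dir, "saved_model.pb"))
    vars_dir = os.path.join(export_dir, "variables")
    vars_empty = os.path.isdir(vars_dir) and not os.listdir(vars_dir)
    return pb_ok and vars_empty
```

If this returns True for your ./serving_model directory, the export is in the expected "frozen" shape and can be served as-is.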
@bendangnuksung No problem, I will try with the empty variables folder and check the results. @cfengai we do indeed have the same issue; thanks for mentioning it here, and I look forward to hearing from you soon. I will keep updating this post as well!
@bendangnuksung Hi again. I successfully served the model, and now I am writing the client API to do object detection. Meanwhile, I have a question and would be very happy if you could put some time into it: I want to define a "signature_name" for my model, but it is not defined in the "signature_def_map". How should I properly define it? Also, if you could help me create the client API or share a useful link, that would be great.
@bendangnuksung First, I want to report that the model serves fine with single-image inference. Thanks a lot! The other thing I tried was running multiple images at a time, but that failed after I changed IMAGES_PER_GPU=2: the converted model reports weird/wrong meta info for the output mrcnn_detection/Reshape_1.
I was expecting something like:
outputs['mrcnn_detection/Reshape_1'] tensor_info:
    dtype: DT_FLOAT
    shape: (2, 20, 6)
    name: mrcnn_detection/Reshape_1:0
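For reference, each row of mrcnn_detection/Reshape_1 follows the Mask R-CNN detection layout (y1, x1, y2, x2, class_id, score), zero-padded up to the maximum number of instances (20 here). A hedged sketch of unpacking that batched output on the client side, with plain nested lists standing in for the response tensor:

```python
def unpack_detections(detections):
    """Split a [batch, max_instances, 6] detection tensor into per-image
    results, dropping zero-padded rows (class_id == 0 marks padding)."""
    results = []
    for image_dets in detections:
        boxes, class_ids, scores = [], [], []
        for y1, x1, y2, x2, class_id, score in image_dets:
            if class_id == 0:  # padded slot, not a real detection
                continue
            boxes.append((y1, x1, y2, x2))
            class_ids.append(int(class_id))
            scores.append(score)
        results.append({"rois": boxes, "class_ids": class_ids, "scores": scores})
    return results
```

Note that the box coordinates come back normalized; mapping them onto pixel coordinates still requires the image window stored in input_image_meta, as in the original Mask_RCNN unmolding code.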
@AliGharbali The 'signature_name' has already been defined with TensorFlow's default signature name, which is 'serving_default'. You can still change the signature name in 'main.py':
# from:
sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
    tf.saved_model.signature_def_utils.predict_signature_def(
        {"input_image": input_image, "input_image_meta": input_image_meta, "input_anchors": input_anchors},
        {"mrcnn_detection/Reshape_1": output_detection, "mrcnn_mask/Reshape_1": output_mask})

# to:
sigs["YOUR_CUSTOM_NAME"] = \
    tf.saved_model.signature_def_utils.predict_signature_def(
        {"input_image": input_image, "input_image_meta": input_image_meta, "input_anchors": input_anchors},
        {"mrcnn_detection/Reshape_1": output_detection, "mrcnn_mask/Reshape_1": output_mask})
And you can even change the names of the input and output signatures.
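On the client API question: with TensorFlow Serving's REST endpoint, the request body simply names the signature and supplies the three inputs from the signature above. A minimal sketch of building that payload (the endpoint URL pattern and model name are assumptions; arrays should be plain nested lists, e.g. numpy arrays converted with .tolist()):

```python
import json

def build_predict_request(image, image_meta, anchors,
                          signature_name="serving_default"):
    """Build the JSON body for a TF Serving REST call, e.g.
    POST http://HOST:8501/v1/models/MODEL_NAME:predict

    Each argument is a nested list with a leading batch dimension,
    matching the input names defined in the signature.
    """
    body = {
        "signature_name": signature_name,
        "inputs": {
            "input_image": image,
            "input_image_meta": image_meta,
            "input_anchors": anchors,
        },
    }
    return json.dumps(body)
```

The response's "outputs" field should then contain the two tensors named in the signature, mrcnn_detection/Reshape_1 and mrcnn_mask/Reshape_1.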
@cfengai It seems the model has been structured to run inference on one image at a time, so this needs to be fixed on the OP's end; it would be awesome if it could be fixed from our end instead. I would ask you to open a new issue for this, as it is diverging from the original issue. Thanks!
@AliGharbali I believe the original issue has not caused any problem for the serving model, so I am closing this issue. Re-open anytime if you think it is not fixed.
Dear bendangnuksung,
First of all, thanks for the code; it was really helpful. I used it to create a serving-ready model. The conversion completed successfully, and I could see:
Finish converting keras model to Frozen PB PATH: ./frozen_model/
FINISH CONVERTING FROZEN PB TO SERVING READY PATH: ./serving_model
COMPLETED
But when I checked the variables folder, it was empty. Could you please help me with this issue?