matterport / Mask_RCNN

Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow

How to build a frozen .pb for this model #1569

Open angyee opened 5 years ago

angyee commented 5 years ago

@moorage @waleedka @haeric @rymalia @PavlosMelissinos @karanchahal

buaacarzp commented 5 years ago

I am in the same trouble as you.

121649982 commented 5 years ago

import os
import tensorflow as tf
from keras import backend as K
from tensorflow.python.framework import graph_util

# `model` is an already-built MaskRCNN instance in inference mode
model_keras = model.keras_model

# All new operations will be in test mode from now on.
K.set_learning_phase(0)

# Create output layers with customized names
num_output = 7
pred_node_names = ["detections", "mrcnn_class", "mrcnn_bbox", "mrcnn_mask",
                   "rois", "rpn_class", "rpn_bbox"]
pred_node_names = ["output_" + name for name in pred_node_names]
pred = [tf.identity(model_keras.outputs[i], name=pred_node_names[i])
        for i in range(num_output)]

sess = K.get_session()

# Get the object detection graph with variables frozen into constants
od_graph_def = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), pred_node_names)

model_dirpath = os.path.dirname("model/")
if not os.path.exists(model_dirpath):
    os.mkdir(model_dirpath)
filename = 'mrcnn_model.pb'
pb_filepath = os.path.join(model_dirpath, filename)
print('Saving frozen graph {} ...'.format(os.path.basename(pb_filepath)))

frozen_graph_path = pb_filepath
with tf.gfile.GFile(frozen_graph_path, 'wb') as f:
    f.write(od_graph_def.SerializeToString())
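For reference, the snippet assumes `model` is an already-built Mask R-CNN instance in inference mode; a minimal sketch of that setup (the config class, log directory, and weights path are placeholders) could look like:

import mrcnn.model as modellib
from samples.coco import coco

class InferenceConfig(coco.CocoConfig):
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

# "logs" and the weights filename below are placeholders; point them at your setup.
model = modellib.MaskRCNN(mode="inference", model_dir="logs", config=InferenceConfig())
model.load_weights("mask_rcnn_coco.h5", by_name=True)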

angyee commented 5 years ago

(quoting the conversion code from the comment above)

I don't understand; can you explain how to do this step by step?

I am running:

sudo python3 keras_to_tensorflow.py --input_model=/home/deepedge/mask_rcnn-master/mask_rcnn_damage_0010.h5 --output_model=/home/deepedge/mask_rcnn-master/dent.pb

and getting:

Using TensorFlow backend.
E0619 15:22:44.379571 140399717234496 keras_to_tensorflow.py:95] Input file specified only holds the weights, and not the model definition. Save the model using model.save(filename.h5) which will contain the network architecture as well as its weights. If the model is saved using the model.save_weights(filename) function, either input_model_json or input_model_yaml flags should be set to to import the network architecture prior to loading the weights. Check the keras documentation for more details (https://keras.io/getting-started/faq/)
Traceback (most recent call last):
  File "keras_to_tensorflow.py", line 182, in <module>
    app.run(main)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "keras_to_tensorflow.py", line 128, in main
    model = load_model(FLAGS.input_model, FLAGS.input_model_json, FLAGS.input_model_yaml)
  File "keras_to_tensorflow.py", line 106, in load_model
    raise wrong_file_err
  File "keras_to_tensorflow.py", line 63, in load_model
    model = keras.models.load_model(input_model_path)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/saving.py", line 419, in load_model
    model = _deserialize_model(f, custom_objects, compile)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/saving.py", line 221, in _deserialize_model
    model_config = f['model_config']
  File "/usr/local/lib/python3.6/dist-packages/keras/utils/io_utils.py", line 302, in __getitem__
    raise ValueError('Cannot create group in read only mode.')
ValueError: Cannot create group in read only mode.

121649982 commented 5 years ago

I didn't use the keras_to_tensorflow.py tool; I just saved the .pb in the inference code, and it works.

angyee commented 5 years ago

I didn't use the keras_to_tensorflow.py tool; I just saved the .pb in the inference code, and it works.

How do I solve my problem?

angyee commented 5 years ago

I am in the same trouble as you.

Did you solve this problem?

kongjibai commented 5 years ago

I didn't use the keras_to_tensorflow.py tool; I just saved the .pb in the inference code, and it works.

Yeah, just save the .pb model in the inference code, but remember to save the network structure to the .pb file.
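The "Cannot create group in read only mode" error above comes up because the training checkpoints (mask_rcnn_*.h5) are written with save_weights() and hold only weights, which keras_to_tensorflow.py cannot load on its own. A minimal sketch (assuming a MaskRCNN instance named model with weights already loaded) of writing a full .h5 that also contains the architecture:

# `model` is assumed to be a mrcnn.model.MaskRCNN instance in inference mode
# with load_weights() already called; the filename is a placeholder.
model.keras_model.save("mask_rcnn_full.h5")  # saves architecture + weights

Note that Mask R-CNN uses custom layers, so keras.models.load_model on such a file may still need a custom_objects mapping; freezing the .pb directly from the live session (as in the snippet earlier in this thread) sidesteps that.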

angyee commented 5 years ago

I didn't use the keras_to_tensorflow.py tool; I just saved the .pb in the inference code, and it works.

Yeah, just save the .pb model in the inference code, but remember to save the network structure to the .pb file.

My problem is converting the .h5 file to a .pb file. Did you solve that?

I ran the same keras_to_tensorflow.py command shown above and got the same "Cannot create group in read only mode" error.

@moorage @waleedka @PavlosMelissinos @haeric @rymalia

ArashHosseini commented 4 years ago

To freeze the graph and export the pre-trained COCO model as a .pb, do the following (let me know how it goes):

In the repo hierarchy, save the content below as a .py file at the README level, and change DEFAULT_WEIGHTS to point to your mask_rcnn_coco.h5 file.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import os
import sys
import warnings

import keras.backend as K
import tensorflow as tf

warnings.filterwarnings('ignore', category=FutureWarning)
# suppress warning and error message tf
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"

# Root directory of the project
ROOT_DIR = os.getcwd()
# Import Mask RCNN
sys.path.append(ROOT_DIR)  # To find local version of the library
from mrcnn import model as modellib
from mrcnn import utils
from samples.coco import coco
K.clear_session()
K.set_learning_phase(0)

##############################################################################
# Load model
##############################################################################

# Model Directory
MODEL_DIR = os.path.join(os.path.dirname(__file__), "logs")
DEFAULT_WEIGHTS = os.path.join(os.path.dirname(__file__) , "samples/coco/mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(DEFAULT_WEIGHTS):
    utils.download_trained_weights(DEFAULT_WEIGHTS)

##############################################################################
# Load configuration
##############################################################################

class InferenceConfig(coco.CocoConfig):
        # Set batch size to 1 since we'll be running inference on
        # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
        GPU_COUNT = 1
        IMAGES_PER_GPU = 1

##############################################################################
# Save entire model function
##############################################################################

def h5_to_pb(h5_model, output_dir, model_name, out_prefix="output_"):
    out_nodes = []
    for i in range(len(h5_model.outputs)):
        out_nodes.append(out_prefix + str(i + 1))
        # outputs (plural) is the list of the model's output tensors
        tf.identity(h5_model.outputs[i], out_prefix + str(i + 1))
    sess = K.get_session()
    init_graph = sess.graph.as_graph_def()
    # use the public graph_util path instead of the private tf._api module
    main_graph = tf.graph_util.convert_variables_to_constants(sess, init_graph, out_nodes)
    with tf.gfile.GFile(os.path.join(output_dir, model_name), "wb") as filemodel:
        filemodel.write(main_graph.SerializeToString())
    print("pb model:", os.path.join(output_dir, model_name))

if __name__ == "__main__":
    config = InferenceConfig()
    config.display()
    # Create model in inference mode
    model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)

    # Set path to model weights
    weights_path = DEFAULT_WEIGHTS#model.find_last()
    # Load weights
    print("Loading weights ", weights_path)
    model.load_weights(weights_path, by_name=True)
    model.keras_model.summary()

    # make folder for full model
    model_dir = os.path.join(ROOT_DIR, "Model")
    if not os.path.exists(model_dir):
        os.makedirs(model_dir)

    # save h5 full model
    name_model = os.path.join(model_dir, "mask_rcnn_landing.h5")
    if not os.path.exists(name_model):
        model.keras_model.save(name_model)
        print("save model: ", name_model)

    # export pb model
    pb_name_model = "mask_rcnn_landing.pb"
    h5_to_pb(model.keras_model, output_dir=model_dir, model_name=pb_name_model)
    K.clear_session()
    sys.exit()

To run the graph on CPU, replace model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config) with:

    DEVICE = "/cpu:0"
    with tf.device(DEVICE):
        model = modellib.MaskRCNN(
            mode="inference", model_dir=MODEL_DIR, config=config)

To create the .pbtxt file, create another .py file with the following:

# This file is useful for reading the contents of the ops generated by ruby.
# You can read any graph definition in pb/pbtxt format generated by ruby
# or by python and then convert it back and forth from human readable to binary format.

import tensorflow as tf
from google.protobuf import text_format
from tensorflow.python.platform import gfile

def pbtxt_to_graphdef(filename):
  with open(filename, 'r') as f:
    graph_def = tf.GraphDef()
    file_content = f.read()
    text_format.Merge(file_content, graph_def)
    tf.import_graph_def(graph_def, name='')
    tf.train.write_graph(graph_def, 'pbtxt/', 'protobuf.pb', as_text=False)

def graphdef_to_pbtxt(filename): 
  with gfile.FastGFile(filename,'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')
    tf.train.write_graph(graph_def, 'pbtxt/', 'protobuf.pbtxt', as_text=True)
  return

graphdef_to_pbtxt('path/to/created/graph.pb')  # here you can write the name of the file to be converted
# and then a new file will be made in pbtxt directory.
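As a quick sanity check (a minimal sketch, assuming TensorFlow 1.x and the Model/mask_rcnn_landing.pb path produced by the export script above), you can parse the frozen graph and print node names to confirm it was written correctly:

import tensorflow as tf

# Parse the frozen graph and list a few node names.
graph_def = tf.GraphDef()
with tf.gfile.GFile("Model/mask_rcnn_landing.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node[:10]:
    print(node.name)
print("total nodes:", len(graph_def.node))
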
kaanaykutkabakci commented 4 years ago

(quoting @ArashHosseini's freeze/export instructions and scripts from the comment above)

Hello @ArashHosseini. Thanks for the information. But I am trying to run Matterport's Mask R-CNN model using OpenCV. When I generate the .pb and .pbtxt files, cv2.dnn.readNetFromTensorflow gives me this error: "cv2.error: OpenCV(4.1.2) C:\projects\opencv-python\opencv\modules\dnn\src\tensorflow\tf_io.cpp:54: error: (-2:Unspecified error) FAILED: ReadProtoFromTextFile(param_file, param). Failed to parse GraphDef file: ./IG/protobuf.pbtxt in function 'cv::dnn::ReadTFNetParamsFromTextFileOrDie'"

How can I overcome this problem? Thanks...

chrigui94 commented 4 years ago

Hello @kaanaykutkabakci, did you find any solution for the readNetFromTensorflow (.pb, .pbtxt) problem?

kaanaykutkabakci commented 4 years ago

Hello @chrigui94, unfortunately I could not find any solution. I gave up on converting Matterport's Keras model to TF and built a custom Mask R-CNN using the TensorFlow Object Detection API instead.

ZouJiu1 commented 4 years ago

Convert to a .pb file:

#!encoding=utf-8
'''
#-----------------
Authors:邹九
Time:2019-11-21
#-----------------
'''
"""
Copyright (c) 2019, by the Authors: Amir H. Abdi
This script is freely available under the MIT Public License.
Please see the License file in the root for details.

The following code snippet will convert the keras model files
to the freezed .pb tensorflow weight file. The resultant TensorFlow model
holds both the model architecture and its associated weights.
"""

import tensorflow as tf
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io
from pathlib import Path
from absl import app
from absl import flags
from absl import logging
from mrcnn import model as modellib
from mrcnn.config import Config
import keras
import os
from keras import backend as K
from keras.models import model_from_json, model_from_yaml
from keras.utils.vis_utils import plot_model

COCO_MODEL_PATH = r'../logs/shapes20191113T1540_mask_rcnn_shapes_0199.h5'

K.set_learning_phase(0)
FLAGS = flags.FLAGS

flags.DEFINE_string('input_model', default=r'', help='Path to the input model.')
flags.DEFINE_string('input_model_json', None, 'Path to the input model '
                                              'architecture in json format.')
flags.DEFINE_string('input_model_yaml', None, 'Path to the input model architecture in yaml format.')
flags.DEFINE_string('output_model', default=r'./shapes20191113T1540_mask_rcnn_shapes_0199.pb', help='Path where the converted model will be stored.')
flags.DEFINE_boolean('save_graph_def', False,
                     'Whether to save the graphdef.pbtxt file which contains '
                     'the graph definition in ASCII format.')
flags.DEFINE_string('output_nodes_prefix', None,
                    'If set, the output nodes will be renamed to '
                    '`output_nodes_prefix`+i, where `i` will numerate the '
                    'number of of output nodes of the network.')
flags.DEFINE_boolean('quantize', False,
                     'If set, the resultant TensorFlow graph weights will be '
                     'converted from float into eight-bit equivalents. See '
                     'documentation here: '
                     'https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms')
flags.DEFINE_boolean('channels_first', False,
                     'Whether channels are the first dimension of a tensor. '
                     'The default is TensorFlow behaviour where channels are '
                     'the last dimension.')
flags.DEFINE_boolean('output_meta_ckpt', False,
                     'If set to True, exports the model as .meta, .index, and '
                     '.data files, with a checkpoint file. These can be later '
                     'loaded in TensorFlow to continue training.')

flags.mark_flag_as_required('input_model')
flags.mark_flag_as_required('output_model')

def load_model(input_model_path, input_json_path=None, input_yaml_path=None):
    if not Path(input_model_path).exists():
        raise FileNotFoundError(
            'Model file `{}` does not exist.'.format(input_model_path))
    try:
        model = keras.models.load_model(input_model_path)
        return model
    except FileNotFoundError as err:
        logging.error('Input model file (%s) does not exist.', FLAGS.input_model)
        raise err
    except ValueError as wrong_file_err:
        if input_json_path:
            if not Path(input_json_path).exists():
                raise FileNotFoundError(
                    'Model description json file `{}` does not exist.'.format(
                        input_json_path))
            try:
                model = model_from_json(open(str(input_json_path)).read())
                model.load_weights(input_model_path)
                return model
            except Exception as err:
                logging.error("Couldn't load model from json.")
                raise err
        elif input_yaml_path:
            if not Path(input_yaml_path).exists():
                raise FileNotFoundError(
                    'Model description yaml file `{}` does not exist.'.format(
                        input_yaml_path))
            try:
                model = model_from_yaml(open(str(input_yaml_path)).read())
                model.load_weights(input_model_path)
                return model
            except Exception as err:
                logging.error("Couldn't load model from yaml.")
                raise err
        else:
            logging.error(
                'Input file specified only holds the weights, and not '
                'the model definition. Save the model using '
                'model.save(filename.h5) which will contain the network '
                'architecture as well as its weights. '
                'If the model is saved using the '
                'model.save_weights(filename) function, either '
                'input_model_json or input_model_yaml flags should be set '
                'to import the network architecture prior to loading the '
                'weights. \n'
                'Check the keras documentation for more details '
                '(https://keras.io/getting-started/faq/)')
            raise wrong_file_err

class ShapesConfig(Config):
    """Configuration for training on the toy shapes dataset.
    Derives from the base Config class and overrides values specific
    to the toy shapes dataset.
    """
    # Give the configuration a recognizable name
    NAME = "shapes"

    # Number of classes (including background)
    NUM_CLASSES = 1 + 14  # background + 14 objects
    # Choose the number of GPU devices
    # os.environ['CUDA_VISIBLE_DEVICES'] = '0'

    # Use small images for faster training. Set the limits of the small side
    # the large side, and that determines the image shape.
    IMAGE_RESIZE_MODE = "square"
    IMAGE_MAX_DIM = 896

    RPN_ANCHOR_SCALES = (8 * 6, 16 * 6, 32 * 6, 64 * 6, 128 * 6)  # anchor side in pixels
    # RPN_ANCHOR_SCALES = (8*5, 16*5, 32*5, 64*5, 128*5)  # anchor side in pixels

    # Reduce training ROIs per image because the images are small and have
    # few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
    TRAIN_ROIS_PER_IMAGE = 100

    # Use a small epoch since the data is simple
    # STEPS_PER_EPOCH = 1000
    STEPS_PER_EPOCH = 1000

    # use small validation steps since the epoch is small
    VALIDATION_STEPS = 25

def main(args):
    # If output_model path is relative and in cwd, make it absolute from root
    output_model = FLAGS.output_model
    if str(Path(output_model).parent) == '.':
        output_model = str((Path.cwd() / output_model))

    output_fld = Path(output_model).parent
    output_model_name = Path(output_model).name
    output_model_stem = Path(output_model).stem
    output_model_pbtxt_name = output_model_stem + '.pbtxt'

    # Create output directory if it does not exist
    Path(output_model).parent.mkdir(parents=True, exist_ok=True)

    if FLAGS.channels_first:
        K.set_image_data_format('channels_first')
    else:
        K.set_image_data_format('channels_last')

    # model = load_model(FLAGS.input_model, FLAGS.input_model_json, FLAGS.input_model_yaml)
    ##--------------------------------------------------------------------------------------#
    config = ShapesConfig()
    config.display()
    MODEL_DIR = r'E:\Desktop\Projects\Mask_RCNN-master\logs'
    model = modellib.MaskRCNN(mode="inference", config=config,\
                              model_dir=MODEL_DIR)
    model.load_weights(COCO_MODEL_PATH, by_name=True)#exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",\
                                # "mrcnn_bbox", "mrcnn_mask"])
    # print(model.summary())
    # plot_model(model, to_file='model1.png', show_shapes=True)
    # model_json = model.to_json()
    # with open(r'./modle.json', 'w') as file:
    #     file.write(model_json)

    print('loaded model and saved json file')
    ##--------------------------------------------------------------------------------------#
    # TODO(amirabdi): Support networks with multiple inputs
    # orig_output_node_names = [node.op.name for node in model.outputs]
    orig_output_node_names = ['mrcnn_detection/Reshape_1', 'mrcnn_class/Softmax', 'mrcnn_bbox/Reshape',\
                              'mrcnn_mask/Sigmoid', 'ROI/packed_2', 'rpn_class/concat', 'rpn_bbox/concat']

    if FLAGS.output_nodes_prefix:
        num_output = len(orig_output_node_names)
        pred = [None] * num_output
        converted_output_node_names = [None] * num_output

        # Create dummy tf nodes to rename output
        for i in range(num_output):
            converted_output_node_names[i] = '{}{}'.format(
                FLAGS.output_nodes_prefix, i)
            pred[i] = tf.identity(model.outputs[i],
                                  name=converted_output_node_names[i])
    else:
        converted_output_node_names = orig_output_node_names
    logging.info('Converted output node names are: %s',
                 str(converted_output_node_names))

    sess = K.get_session()
    if FLAGS.output_meta_ckpt:
        saver = tf.train.Saver()
        saver.save(sess, str(output_fld / output_model_stem))

    if FLAGS.save_graph_def:
        tf.train.write_graph(sess.graph.as_graph_def(), str(output_fld),
                             output_model_pbtxt_name, as_text=True)
        logging.info('Saved the graph definition in ascii format at %s',
                     str(Path(output_fld) / output_model_pbtxt_name))

    if FLAGS.quantize:
        from tensorflow.tools.graph_transforms import TransformGraph
        transforms = ["quantize_weights", "quantize_nodes"]
        transformed_graph_def = TransformGraph(sess.graph.as_graph_def(), [],
                                               converted_output_node_names,
                                               transforms)
        constant_graph = graph_util.convert_variables_to_constants(
            sess,
            transformed_graph_def,
            converted_output_node_names)
    else:
        constant_graph = graph_util.convert_variables_to_constants(
            sess,
            sess.graph.as_graph_def(),
            converted_output_node_names)

    graph_io.write_graph(constant_graph, str(output_fld), output_model_name,
                         as_text=False)
    logging.info('Saved the frozen graph at %s',
                 str(Path(output_fld) / output_model_name))

if __name__ == "__main__":
    app.run(main)
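Assuming the script above is saved as, say, h5_to_pb_converter.py (a placeholder name), one way to invoke it is shown below; note that the weights actually loaded come from the hard-coded COCO_MODEL_PATH, while input_model is still marked as a required flag:

sudo python3 h5_to_pb_converter.py --input_model=path/to/weights.h5 --output_model=./shapes20191113T1540_mask_rcnn_shapes_0199.pb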

Load the .pb model:

def load_detection_model(model):
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.gfile.GFile(model, 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')
        input_image = tf.get_default_graph().get_tensor_by_name('input_image:0')
        input_image_meta = tf.get_default_graph().get_tensor_by_name('input_image_meta:0')
        input_anchors = tf.get_default_graph().get_tensor_by_name('input_anchors:0')
        detections = tf.get_default_graph().get_tensor_by_name('mrcnn_detection/Reshape_1:0')
        mrcnn_mask = tf.get_default_graph().get_tensor_by_name('mrcnn_mask/Sigmoid:0')
    sessd=tf.Session(config=config,graph=detection_graph)
    print('Loaded detection model from file "%s"' % model)
    return sessd, input_image, input_image_meta, input_anchors, detections, mrcnn_mask

sessd, input_image, input_image_meta, input_anchors, detections, mrcnn_mask = load_detection_model(model_path)
results = model.detect_pb([image], sessd, input_image, input_image_meta, input_anchors, detections, mrcnn_mask,verbose=1)

To use the model, add the following method to the MaskRCNN class in mrcnn/model.py:

    def detect_pb(self, images, sessd, input_image, input_image_meta, input_anchors, detections, mrcnn_mask, verbose=1):
        """Runs the detection pipeline.

        images: List of images, potentially of different sizes.

        Returns a list of dicts, one dict per image. The dict contains:
        rois: [N, (y1, x1, y2, x2)] detection bounding boxes
        class_ids: [N] int class IDs
        scores: [N] float probability scores for the class IDs
        masks: [H, W, N] instance binary masks
        """
        assert self.mode == "inference", "Create model in inference mode."
        assert len(
            images) == self.config.BATCH_SIZE, "len(images) must be equal to BATCH_SIZE"

        # if verbose:
        #     log("Processing {} images".format(len(images)))
        #     for image in images:
        #         log("image", image)

        # Mold inputs to format expected by the neural network
        molded_images, image_metas, windows = self.mold_inputs(images)

        # Validate image sizes
        # All images in a batch MUST be of the same size
        image_shape = molded_images[0].shape
        # print(image_shape, molded_images.shape)
        for g in molded_images[1:]:
            assert g.shape == image_shape,\
                "After resizing, all images must have the same size. Check IMAGE_RESIZE_MODE and image sizes."

        # Anchors
        anchors = self.get_anchors(image_shape)
        # Duplicate across the batch dimension because Keras requires it
        # TODO: can this be optimized to avoid duplicating the anchors?
        anchors = np.broadcast_to(anchors, (self.config.BATCH_SIZE,) + anchors.shape)

        # if verbose:
        #     log("molded_images", molded_images)
        #     log("image_metas", image_metas)
        #     log("anchors", anchors)
        # Run object detection
        # detections, _, _, mrcnn_mask, _, _, _ =\
        #     self.keras_model.predict([molded_images, image_metas, anchors], verbose=0)
        detectionsed, mrcnn_masked = sessd.run([detections, mrcnn_mask], feed_dict = {input_image: molded_images, \
                                                               input_image_meta: image_metas, \
                                                               input_anchors: anchors})
        mrcnn_masked = np.expand_dims(mrcnn_masked, 0)
        detections = np.array(detectionsed)
        mrcnn_mask = np.array(mrcnn_masked)
        # Process detections
        results = []
        for i, image in enumerate(images):
            xi = detections[i]
            yi = mrcnn_mask[i]
            moldedi = molded_images[i]
            windowsi = windows[i]
            final_rois, final_class_ids, final_scores, final_masks =\
                self.unmold_detections(detections[i], mrcnn_mask[i],
                                       image.shape, molded_images[i].shape,
                                       windows[i])
            results.append({
                "rois": final_rois,
                "class_ids": final_class_ids,
                "scores": final_scores,
                "masks": final_masks,
            })
        return results

jgerardsimcock commented 4 years ago

Now that we have our .pb file and our .pbtxt file, what do we do? How can we deploy this as a TF Serving model?

Jeffin21 commented 3 years ago

@ArashHosseini I got this error:

ModuleNotFoundError                       Traceback (most recent call last)
in <module>()
     19 from mrcnn import model as modellib
     20 from mrcnn import utils
---> 21 from mrcnn import coco
     22 K.clear_session()
     23 K.set_learning_phase(0)

/content/mrcnn/coco.py in <module>()
     46 import shutil
     47 
---> 48 from config import Config
     49 import utils
     50 import model as modellib

ModuleNotFoundError: No module named 'config'

ArashHosseini commented 3 years ago

@Jeffin21 the code is looking for this file; please re-parent the config file so the code can find the module.
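If re-parenting the file is inconvenient, another option (a minimal sketch, based only on the paths shown in the traceback above) is to make the bare `config`, `utils`, and `model` imports in coco.py resolvable, or to switch them to the package form:

import sys

# Option 1: put the directory that contains config.py on sys.path before importing coco
# (the path below is the one from the traceback; adjust it to your setup).
sys.path.append("/content/mrcnn")

# Option 2: edit coco.py to import from the mrcnn package instead:
# from mrcnn.config import Config
# from mrcnn import utils
# from mrcnn import model as modellib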

Jeffin21 commented 3 years ago

@ArashHosseini could you tell me where I can find the pipeline.config? I made a custom dataset and trained it, but I couldn't find the pipeline.config to generate the .pbtxt file for loading into OpenCV DNN.