matterport / Mask_RCNN

Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow

How to convert Mask R-CNN model to TensorFlow .pb #218

Open luoshanwei opened 6 years ago

luoshanwei commented 6 years ago

I want to run Mask R-CNN on Android, but I don't have a .pb file.

ps48 commented 6 years ago

@luoshanwei You'll need to export the Keras checkpoint to a TensorFlow .pb file and then use that.
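
For reference, the general mechanism of such an export, once all layers serialize cleanly, is to freeze the Keras session's graph into a GraphDef with the variables baked in as constants. A minimal sketch using the TF 1.x APIs of that era (function and path names here are only illustrative, not the exact script discussed later in this thread):

import tensorflow as tf
from tensorflow.python.framework import graph_util
from keras import backend as K

def export_frozen_graph(keras_model, pb_path):
    # Names of the graph nodes that produce the model's outputs
    output_names = [out.op.name for out in keras_model.outputs]
    sess = K.get_session()
    # Bake variables into constants so the graph is self-contained
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), output_names)
    with tf.gfile.GFile(pb_path, "wb") as f:
        f.write(frozen.SerializeToString())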

ericj974 commented 6 years ago

@ps48 This will only be possible once #167 is complete, no? (The Python layer is not serializable.)

ps48 commented 6 years ago

@ericj974 Yes, it is complete. I tried exporting the file and it works! Just waiting for the merge.

Surmeh commented 6 years ago

Hello, could you send me the script you used to convert the checkpoint to .pb? I tried using https://github.com/amir-abdi/keras_to_tensorflow/blob/master/keras_to_tensorflow.py but it gives "ValueError: No model found in config file."

ericj974 commented 6 years ago

@Surmeh Code here

Surmeh commented 6 years ago

Thanks a lot @ericj974. What does "inference_config" refer to here, in def export(inference_config, train_log_dirpath, model_path=None)?

ericj974 commented 6 years ago

I should have commented the function. It's a config instance used for inference (related to the config used for training, albeit not exactly the same). Look at train_shapes.ipynb for an example of it.
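
For readers who haven't opened the notebook: the inference config there is defined roughly along these lines, subclassing the training config (ShapesConfig in train_shapes.ipynb) and forcing a batch size of one:

class InferenceConfig(ShapesConfig):
    # ShapesConfig is the training config defined earlier in the notebook.
    # Run inference on one image at a time: batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

inference_config = InferenceConfig()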

whrenstone commented 6 years ago

@luoshanwei Have you found a solution? I get an error telling me that the inputs are mismatched. Could you give me a suggestion or some sample code? Many thanks!

Cpruce commented 6 years ago

I can verify that https://github.com/amir-abdi/keras_to_tensorflow/blob/master/keras_to_tensorflow.py works for extracting the TensorFlow model.

Adriel-M commented 6 years ago

@Cpruce What are you feeding into that script? I'm trying to feed in the model saved from model.keras_model.save(path) and I'm hitting NameError: name 'tf' is not defined at lambda t: tf.reshape(t, [tf.shape(t)[0], -1, 2]))(x).

Also I'm getting ValueError: Unknown layer: BatchNorm if I don't comment out the BatchNorm layers.

EDIT:

I bypassed NameError: name 'tf' is not defined by converting the lambda function into a regular function and importing tensorflow. Now I need to figure out how to handle the custom layers (ProposalLayer, etc.).
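
Roughly, the workaround described above amounts to the following, shown on a toy model rather than the actual model.py code (the function and file names are only illustrative): define the lambda body as a named, module-level function in a file that imports tensorflow, so the name tf resolves when the saved layer is deserialized.

import tensorflow as tf
import keras.layers as KL
import keras.models as KM

def reshape_pairs(t):
    # Same operation as the original lambda, but as a named function
    # living next to an explicit "import tensorflow as tf"
    return tf.reshape(t, [tf.shape(t)[0], -1, 2])

inputs = KL.Input(shape=(8, 4))
outputs = KL.Lambda(reshape_pairs)(inputs)
model = KM.Model(inputs, outputs)
model.save("lambda_demo.h5")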

Cpruce commented 6 years ago

python3 keras_to_tensorflow.py -input_model_file saved_model_mrcnn_eval -output_model_file model.pb -num_outputs=7

I do model.keras_model.save(path) in coco.py as well. I definitely hit the BatchNorm issue and, if I remember correctly, was not able to resolve it on my laptop (I mainly tried getting the dependency versions up to date). Not a great answer, but I think what worked for me was just doing it on my Linux machine.

jmtatsch commented 6 years ago

@Cpruce Where exactly in coco.py do you save the model? I tried after evaluate_coco on line 516, because the graph should be fully built up after that, but keras_to_tensorflow complains that the architecture is not contained in the .h5 file. I also attempted to export the model after compile in model.py, line 2083, but then I get deepcopy errors (maximum recursion depth exceeded).

Surmeh commented 6 years ago

Hi @Cpruce, we tried to extract the Mask R-CNN weights to a model file, but failed to do so. Would you be able to share the .pb model if you have successfully generated it?

Cpruce commented 6 years ago

@jmtatsch I exported the model on line 516 as well. Are you doing model.keras_model.save(path)?

@Surmeh I tried zipping it but it's still too big. The .pb file is 249 MB and GitHub only allows 10 MB.

Surmeh commented 6 years ago

@Cpruce : Alright, thanks for trying.

Cpruce commented 6 years ago

@Surmeh can you please share the command you're running and the error?

ps48 commented 6 years ago

@Cpruce To have the exported file on GitHub, we can upload it as a binary file with the next release (similar to the model releases).

Cpruce commented 6 years ago

@ps48 Sounds like a good idea 👍 The only downside is he won't be able to make changes and then save the model, though that may not be necessary for his use case.

Surmeh commented 6 years ago

@ps48 It would be really great if you could upload the binary file. Also, when are you planning your next release?

Cpruce commented 6 years ago

@waleedka

jmtatsch commented 6 years ago

@Cpruce Would you be so kind as to look up which Keras/TF versions you are running, and whether your export still works?

pip3 show keras Version: 2.1.4

pip3 show tensorflow-gpu Version: 1.4.0

Cpruce commented 6 years ago

Sure thing:

pip show keras Name: Keras Version: 2.1.2

pip show tensorflow-gpu Name: tensorflow-gpu Version: 1.3.0

The export still works. Which OS are you using?

jmtatsch commented 6 years ago

@Cpruce I am running Ubuntu 16.04 now with your respective Keras and tensorflow-gpu versions, but keras_to_tensorflow.py is still unable to load_model() the exported model from model.h5. It seems as if my Keras model.save(path) is unable to save the whole model to h5. I will try to make a minimal example to narrow this down further.

Nevertheless how exactly do you trigger your model export? I use: python3 coco.py evaluate --dataset=$MSCOCO_DATASET --model=coco

Cpruce commented 6 years ago

@jmtatsch To produce h5:
python3 coco.py evaluate --dataset=$COCO_PATH --model=coco

To save the model in coco.py, right after the evaluation call:

evaluate_coco(model, dataset_val, coco, "bbox", limit=int(args.limit))
model.keras_model.save("mrcnn_eval.h5")

Extracting the .pb from the .h5:

python3 keras_to_tensorflow.py -input_model_file saved_model_mrcnn_eval.h5 -output_model_file model.pb -num_outputs=7

Could you paste the full stacktrace?

Surmeh commented 6 years ago

Hi @Cpruce I am running the following command: python3 coco.py evaluate --dataset=/home/surabhi/Tensorflow_Models/coco/val2014 --model=coco

Error:

Traceback (most recent call last):
  File "coco.py", line 469, in
    model.load_weights(model_path, by_name=True)
  File "/home/surabhi/Tensorflow_Models/model.py", line 2037, in load_weights
    topology.load_weights_from_hdf5_group_by_name(f, layers)
  File "/home/surabhi/tensorflow/lib/python3.5/site-packages/keras/engine/topology.py", line 3260, in load_weights_from_hdf5_group_by_name
    ' element(s).')
ValueError: Layer #9 (named "res2a_branch2b") expects 0 weight(s), but the saved weights have 2 element(s).

jmtatsch commented 6 years ago

@Cpruce Saving and reloading a minimal model works in the same workspace, so I'm starting to run out of ideas here. Here is my full stack trace:

python3 keras_to_tensorflow.py -input_model_file mrcnn_eval.h5 -output_model_file model.pb -num_outputs=7
usage: keras_to_tensorflow.py [-h] [-input_fld INPUT_FLD]
                              [-output_fld OUTPUT_FLD]
                              [-input_model_file INPUT_MODEL_FILE]
                              [-output_model_file OUTPUT_MODEL_FILE]
                              [-output_graphdef_file OUTPUT_GRAPHDEF_FILE]
                              [-num_outputs NUM_OUTPUTS]
                              [-graph_def GRAPH_DEF]
                              [-output_node_prefix OUTPUT_NODE_PREFIX]
                              [-quantize QUANTIZE]
                              [-theano_backend THEANO_BACKEND] [-f F]

set input arguments

optional arguments:
  -h, --help            show this help message and exit
  -input_fld INPUT_FLD
  -output_fld OUTPUT_FLD
  -input_model_file INPUT_MODEL_FILE
  -output_model_file OUTPUT_MODEL_FILE
  -output_graphdef_file OUTPUT_GRAPHDEF_FILE
  -num_outputs NUM_OUTPUTS
  -graph_def GRAPH_DEF
  -output_node_prefix OUTPUT_NODE_PREFIX
  -quantize QUANTIZE
  -theano_backend THEANO_BACKEND
  -f F
input args:  Namespace(f=None, graph_def=False, input_fld='.', input_model_file='mrcnn_eval.h5', num_outputs=7, output_fld='', output_graphdef_file='model.ascii', output_model_file='model.pb', output_node_prefix='output_node', quantize=False, theano_backend=False)
/home/tatsch/.virtualenvs/maskrcnn/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
Input file specified (mrcnn_eval.h5) only holds the weights, and not the model defenition.
    Save the model using mode.save(filename.h5) which will contain the network architecture
    as well as its weights. 
    If the model is saved using model.save_weights(filename.h5), the model architecture is 
    expected to be saved separately in a json format and loaded prior to loading the weights.
    Check the keras documentation for more details (https://keras.io/getting-started/faq/)
Traceback (most recent call last):
  File "keras_to_tensorflow.py", line 123, in <module>
    raise err
  File "keras_to_tensorflow.py", line 114, in <module>
    net_model = load_model(weight_file_path)
  File "/home/tatsch/.virtualenvs/maskrcnn/lib/python3.5/site-packages/keras/models.py", line 240, in load_model
    model = model_from_config(model_config, custom_objects=custom_objects)
  File "/home/tatsch/.virtualenvs/maskrcnn/lib/python3.5/site-packages/keras/models.py", line 314, in model_from_config
    return layer_module.deserialize(config, custom_objects=custom_objects)
  File "/home/tatsch/.virtualenvs/maskrcnn/lib/python3.5/site-packages/keras/layers/__init__.py", line 55, in deserialize
    printable_module_name='layer')
  File "/home/tatsch/.virtualenvs/maskrcnn/lib/python3.5/site-packages/keras/utils/generic_utils.py", line 140, in deserialize_keras_object
    list(custom_objects.items())))
  File "/home/tatsch/.virtualenvs/maskrcnn/lib/python3.5/site-packages/keras/engine/topology.py", line 2490, in from_config
    process_layer(layer_data)
  File "/home/tatsch/.virtualenvs/maskrcnn/lib/python3.5/site-packages/keras/engine/topology.py", line 2476, in process_layer
    custom_objects=custom_objects)
  File "/home/tatsch/.virtualenvs/maskrcnn/lib/python3.5/site-packages/keras/layers/__init__.py", line 55, in deserialize
    printable_module_name='layer')
  File "/home/tatsch/.virtualenvs/maskrcnn/lib/python3.5/site-packages/keras/utils/generic_utils.py", line 134, in deserialize_keras_object
    ': ' + class_name)
ValueError: Unknown layer: BatchNorm
Adriel-M commented 6 years ago

@jmtatsch take a look at: https://github.com/matterport/Mask_RCNN/issues/218#issuecomment-365069480

I think the issue here is that it can't handle the custom layers.
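
The usual way around ValueError: Unknown layer: BatchNorm when calling load_model outside this repo is to pass the custom classes from model.py via custom_objects; something along these lines (the exact set of classes to list is whatever the loader complains about), and the keras_to_tensorflow script would need the same treatment where it calls load_model:

from keras.models import load_model
import model as modellib  # Mask_RCNN's model.py

# Map the repo's custom layer names to their classes so Keras can deserialize them.
custom_objects = {
    "BatchNorm": modellib.BatchNorm,
    "ProposalLayer": modellib.ProposalLayer,
    "PyramidROIAlign": modellib.PyramidROIAlign,
    "DetectionLayer": modellib.DetectionLayer,
}
net_model = load_model("mrcnn_eval.h5", custom_objects=custom_objects)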

Cpruce commented 6 years ago

@Surmeh Can you load the weights from the original h5 file? @jmtatsch I'm not sure if I've tried extracting the model outside of the workspace. Can you extract the .pb there and then use the TF model somewhere else?

jmtatsch commented 6 years ago

@Cpruce Sorry, by workspace I meant virtual environment. I also tried to run keras_to_tensorflow.py in the Mask_RCNN folder, without success. @Adriel-M So you commented out all the BatchNorms in model.py? Wouldn't that mess up the results? Or did you refactor it to use KL.BatchNormalization again?

jmtatsch commented 6 years ago

Ok, I am stuck at the ProposalLayer now as well.

Surmeh commented 6 years ago

@Cpruce Nope, I can't. Here is the code I'm running:

MODEL_DIR = os.path.join(ROOT_DIR, "logs")
model_path = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
config = coco.CocoConfig()

class InferenceConfig(config.__class__):
    # Set batch size to 1 since we'll be running inference on
    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
    DETECTION_MIN_CONFIDENCE = 0

config = InferenceConfig()
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
model.load_weights(model_path)

Error: It's the same error as before; in this one the layer name is not printed. ValueError: You are trying to load a weight file containing 233 layers into a model with 1256 layers.

Cpruce commented 6 years ago

@jmtatsch what's wrong at the ProposalLayer? @Surmeh you're trying with a clean version of the latest master and mask_rcnn_coco.h5?

Surmeh commented 6 years ago

@Cpruce I've got the saved model using coco.py after getting a clean version of master. Now I will try to convert it to .pb using the keras_to_tensorflow script.

chenyuZha commented 6 years ago

Hello, I finally converted the .h5 to .pb with @ericj974's code. Now I'm trying to run inference with the .pb, but I have some problems with how to process the input images to feed into the placeholders.

As I understand it, you need to feed two tensors into the placeholders: input_image:0 and image_meta:0.

I'm a little confused about how to get the tensor corresponding to the image_meta placeholder. I checked the detect function in model.py, but I still don't get it.

Does anyone have an idea?

Cpruce commented 6 years ago

@chenyuZha You can use this, right?

def compose_image_meta(image_id, image_shape, window, active_class_ids):
    """Takes attributes of an image and puts them in one 1D array.
    image_id: An int ID of the image. Useful for debugging.
    image_shape: [height, width, channels]
    window: (y1, x1, y2, x2) in pixels. The area of the image where the real
            image is (excluding the padding)
    active_class_ids: List of class_ids available in the dataset from which
        the image came. Useful if training on images from multiple datasets
        where not all classes are present in all datasets.
    """
    meta = np.array(
        [image_id] +            # size=1
        list(image_shape) +     # size=3
        list(window) +          # size=4 (y1, x1, y2, x2) in image cooredinates
        list(active_class_ids)  # size=num_classes
    )
    return meta
chenyuZha commented 6 years ago

@Cpruce Yes, but I'm not very sure about the processing steps. My understanding is:

1. Use the resize_image function (in the utils script) to get the window parameters and the molded_images.
2. Use compose_image_meta to obtain the image_meta.
3. The molded_images then correspond to the placeholder input_image:0, and image_meta corresponds to the placeholder image_meta:0.
4. sess.run to get the nodes [detections, mrcnn_class, mrcnn_bbox, mrcnn_mask, rois, rpn_class, rpn_bbox].
5. Run unmold_detections to finally obtain [rois, class_ids, scores, masks], as Keras does.

Please tell me if this is correct. Thanks.

Cpruce commented 6 years ago

Yup, sounds correct to me. If you're implementing in another environment, you'll probably have to debug each step...
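
Putting those steps together, a rough sketch of driving the frozen graph with TF 1.x might look like the following. The output tensor names are assumptions for illustration (inspect the exported graph, e.g. with TensorBoard, to get the real ones), and molded_images / image_metas are assumed to come from resize_image, mold_image and compose_image_meta as described above.

import numpy as np
import tensorflow as tf

# Load the frozen graph from the exported .pb
graph_def = tf.GraphDef()
with tf.gfile.GFile("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    # Dummy inputs with the expected shapes; in practice these come from
    # resize_image / mold_image / compose_image_meta (steps 1-3 above).
    molded_images = np.zeros((1, 1024, 1024, 3), dtype=np.float32)
    image_metas = np.zeros((1, 1 + 3 + 4 + 81), dtype=np.float32)  # 81 classes for COCO
    detections, masks = sess.run(
        ["mrcnn_detection/output:0", "mrcnn_mask/output:0"],   # assumed node names, verify in the graph
        feed_dict={"input_image:0": molded_images,
                   "input_image_meta:0": image_metas})          # placeholder names, verify in the graph
    # unmold_detections() then converts these back to rois / class_ids / scores / masks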

ashgolzar commented 6 years ago

@chenyuZha Even if you use the right inputs, you will get an error due to tf.py_func in DetectionLayer:

"N.B. The tf.py_func() operation has the following known limitations: The body of the function (i.e. func) will not be serialized in a GraphDef. Therefore, you should not use this function if you need to serialize your model and restore it in a different environment."

So you need to rewrite DetectionLayer using tensors instead of NumPy ndarrays.

Cpruce commented 6 years ago

@ashgolzar

Seems like you're not on the latest master https://github.com/matterport/Mask_RCNN/pull/167

ashgolzar commented 6 years ago

@Cpruce apparently not :D That's great :+1: I checked with the latest PR and it works fine.

Cpruce commented 6 years ago

@ashgolzar Awesome :)

ivshli commented 6 years ago

@ashgolzar Are you using the TF model in a C++ environment?

ashgolzar commented 6 years ago

@ivshli No, I use Python.

joelteply commented 6 years ago

New here. Glad to see the progress. Let's see if we can get this working in C++. I'll try to help, also with reducing the weight sizes; that's usually pretty straightforward, and I have a shell script I can upload. My primary concern here, as it was with PSPNet, is with the BN nodes. That one also had a bunch of lambdas for the pyramiding.

fastlater commented 6 years ago

@ericj974 Thanks for the script. It does work. I just want to add: in case someone wants to inspect the exported model and check the placeholder names, you can use import_pb_to_tensorboard.py from here:

python import_pb_to_tensorboard.py --model_dir=model/ownmodel.pb --log_dir=logsFolder

liangbo-1 commented 6 years ago

@fastlater I've trained the model and saved it into the .h5 file. How should I use @ericj974's script to get the .pb file? What do I need to change in the script? Thank you!

fastlater commented 6 years ago

What do I need to change in the script?

@liangbo-1 Nothing. Did you try it? Did you get an error? I am inspecting the exported model right now.

liangbo-1 commented 6 years ago

@fastlater Sorry, I don't know how to run the script or how to add my model. Can you give me some specific guidance?

liangbo-1 commented 6 years ago

@chenyuZha @ericj974 @fastlater Did you get the .pb file successfully with @ericj974's script? I have the following results:

python3 export_model.py -input_model_file 'mask_rcnn_ec_0002.h5' -output_model_file 'model.pb' -graph_def=True
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.

Can you tell me how you did it? Thank you.

ypflll commented 6 years ago

@chenyuZha @joelteply I'm also working on using the .pb file in Python or C++. It would be a great help if you could share your code.