cklat opened this issue 6 years ago
I'm struggling with the same problem and would appreciate any solution to it ;-)
Note: if I try to reuse the model in the callback and execute
image = dataset_train.load_image(image_id)
model.mode = "inference"
results = model.detect([image], verbose=1)
I get the following error, but I don't know what it means or how to fix it:
Processing 1 images
image shape: (1024, 1024, 3) min: 26.00000 max: 255.00000 uint8
molded_images shape: (1, 1088, 1088, 3) min: -48.22000 max: 215.44000 float64
image_metas shape: (1, 14) min: 0.00000 max: 1088.00000 int32
anchors shape: (1, 295647, 4) min: -0.08327 max: 1.02439 float32
Traceback (most recent call last):
File "...\circles.py", line 550, in <module>
train(model, args.dataset, args.subset)
File "...\circles.py", line 340, in train
custom_callbacks=[tensorboard, predictOnImgsCB]) # run on another cmd with venv: python -m tensorboard.main --logdir=logs/
File "...\model.py", line 2381, in train
use_multiprocessing=True,
File "...\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "...\site-packages\keras\engine\training.py", line 1426, in fit_generator
initial_epoch=initial_epoch)
File "...\site-packages\keras\engine\training_generator.py", line 229, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
File "...\site-packages\keras\callbacks.py", line 77, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "...\circles.py", line 308, in predictOnImgs
results = model.detect([image], verbose=1)
File "...\model.py", line 2531, in detect
self.keras_model.predict([molded_images, image_metas, anchors], verbose=0)
File "...\site-packages\keras\engine\training.py", line 1152, in predict
x, _, _ = self._standardize_user_data(x)
File "...\site-packages\keras\engine\training.py", line 754, in _standardize_user_data
exception_prefix='input')
File "...\site-packages\keras\engine\training_utils.py", line 100, in standardize_input_data
str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 7 array(s), but instead got the following list of 3 arrays: [array([[[[-43.53, -39.56, -48.22],
[-43.53, -39.56, -48.22],
[-43.53, -39.56, -48.22],
...,
[-43.53, -39.56, -48.22],
[-43.53, -39.56, -48.22],
[...
model.mode = "inference"
Setting this attribute won't work on a model that was already built in "training" mode; the underlying Keras graph still expects the seven training inputs (hence the "Expected to see 7 array(s)" in the traceback). To fix it, rebuild the model in "inference" mode and load your trained weights.
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load the trained weights (here: a SpaceNet checkpoint from the logs directory)
SPACENET_MODEL = 'Mask_RCNN/logs/spacenet20181211T1135/mask_rcnn_spacenet_0010.h5'
model.load_weights(SPACENET_MODEL, by_name=True)
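With the model rebuilt in inference mode, the detection call from the callback should then work. A minimal sketch, assuming dataset_train and image_id are defined as in the original callback:
# Run detection with the inference-mode model
image = dataset_train.load_image(image_id)
results = model.detect([image], verbose=1)
r = results[0]  # dict with keys 'rois', 'class_ids', 'scores', 'masks'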
I have written a callback to evaluate the trained model with a custom metric on some validation data. However, my approach is quite naive and memory-hungry (I start getting OOM errors after a few epochs): at the end of every epoch I create a new Mask R-CNN model in inference mode with a validation config and load the latest checkpoint file into it. So I effectively run two models at the end of each epoch - the one that is training and a second one to validate it.

I wonder whether I can use the model that I'm training to do the validation, without setting up a new model, but I'm not sure how. Can anybody give me a hint? It should work, since val_loss, val_rpn_loss and so forth are calculated, for which the model presumably has to be run in inference mode somehow. Unfortunately, I'm having trouble getting my head around the relevant code snippets.
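For reference, a rough sketch of the "two models" approach described above, written as a Keras callback. EvalCallback, InferenceConfig, dataset_val and the metric computation are placeholders for your own setup, and find_last() is assumed to return the latest checkpoint path (older Mask_RCNN versions return a tuple instead):

import keras
import mrcnn.model as modellib

class EvalCallback(keras.callbacks.Callback):
    """Rebuild an inference-mode Mask R-CNN after each epoch and score it on validation data.
    This is the memory-hungry 'second model' approach; the old graphs are not freed,
    which is likely why OOM shows up after several epochs."""

    def __init__(self, train_model, inference_config, dataset_val, model_dir):
        super(EvalCallback, self).__init__()
        self.train_model = train_model          # the MaskRCNN wrapper currently training
        self.inference_config = inference_config
        self.dataset_val = dataset_val
        self.model_dir = model_dir

    def on_epoch_end(self, epoch, logs=None):
        # Build a fresh model in inference mode and load the last checkpoint into it.
        inf_model = modellib.MaskRCNN(mode="inference",
                                      config=self.inference_config,
                                      model_dir=self.model_dir)
        weights_path = self.train_model.find_last()
        inf_model.load_weights(weights_path, by_name=True)

        # Run detection on a handful of validation images and compute the custom metric.
        for image_id in self.dataset_val.image_ids[:10]:
            image = self.dataset_val.load_image(image_id)
            r = inf_model.detect([image], verbose=0)[0]
            # ... compare r['rois'], r['class_ids'], r['masks'] against ground truth here ...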