michalfaber / tensorflow_Realtime_Multi-Person_Pose_Estimation

Multi-Person Pose Estimation project for Tensorflow 2.0 with a small and fast model based on MobilenetV3

ConcatOp Error when predicting from an image #5

Closed JivanRoquet closed 4 years ago

JivanRoquet commented 4 years ago

When trying to get predictions from an image, an error occurs.

Code:

import random

import cv2
import numpy as np
import tensorflow as tf

pose_model = tf.keras.models.load_model('/storage')
pose_model.load_weights('/storage/weights.best.mobilenet.h5')

img_test = cv2.imread('/storage/whole-dresses/{}.jpg'.format(random.sample(images, 1)[0]))
img_test = np.expand_dims(img_test, axis=0)  # add a batch dimension

print(img_test.shape)
# (1, 440, 275, 3)

pred = pose_model.predict(img_test)

The following error happens:

InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,32,55,34] vs. shape[1] = [1,96,28,28]

I don't understand how either [1,32,55,34] or [1,96,28,28] relates to the shape of the image I'm using. The test_pose_vgg example notebook says in the comments that the shape should be (1, width, height, channels), and img_test.shape confirms it.

Does the model only work with square images, or is there something that I'm missing?
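For what it's worth, the spatial sizes in the error do seem to line up with my input at 1/8 resolution, while the other branch looks fixed at 28x28. A quick sanity check of that guess (the factor of 8 and the 224x224 figure are purely assumptions on my part, not something stated in the repo):

print(440 // 8, 275 // 8)  # 55 34 -- matches the 55x34 in shape[0]
print(224 // 8, 224 // 8)  # 28 28 -- matches the 28x28 in shape[1]

So maybe the saved model was built around a fixed 224x224 input rather than arbitrary sizes?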

JivanRoquet commented 4 years ago

As a complement, to stick even more closely to your example in test_pose_vgg.ipynb:

img_test = cv2.imread('/storage/whole-dresses/{}.jpg'.format(random.sample(images, 1)[0]))
img_test_padded, pad = pad_right_down_corner(img_test, stride, padValue)  # stride and padValue as defined in the notebook
input_img = np.transpose(np.float32(img_test_padded[:,:,:,np.newaxis]), (3,0,1,2))  # -> (1, H_padded, W_padded, 3)

pred = pose_model.predict(input_img)

Still the exact same error.

JivanRoquet commented 4 years ago

Silly me, I was following the test_pose_vgg.ipynb example while using the MobileNet model...

When following the MobileNet notebook example, all works well!
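In case it helps anyone else, I think the difference was mainly in how the input image is prepared before calling predict, though I haven't diffed the two notebooks line by line. A rough sketch of what ended up working for me (the exact steps are in the MobileNet example notebook; the 224x224 size and the plain /255 scaling below are from memory and may not match the notebook exactly):

img = cv2.imread('/storage/whole-dresses/{}.jpg'.format(random.sample(images, 1)[0]))
img = cv2.resize(img, (224, 224))                             # cv2.resize takes (width, height)
img = np.expand_dims(img.astype(np.float32) / 255.0, axis=0)  # -> (1, 224, 224, 3)
pred = pose_model.predict(img)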

By the way, do you have any plans to make the VGG model/weights downloadable? Judging from your example notebooks, there really seems to be a huge difference in quality/accuracy.