iboraham closed this issue 2 years ago
Hi! You see this incorrect output because the model expects input values in the range [0, 255]. If you simply remove the dtype argument from tf.io.decode_image, that solves the issue. Another point to consider is that the model ideally receives images of the size it was trained on; for the SALICON version, this is (240, 320).
```python
import tensorflow as tf

# Load the converted TF2 SavedModel and grab its serving signature
imported = tf.saved_model.load("./converted_model")
imported = imported.signatures["serving_default"]

# Decode without a dtype argument so values stay as uint8 in [0, 255]
img = tf.io.read_file("face.jpg")
tensor = tf.io.decode_image(img, channels=3)

inference_shape = (240, 320)  # input size the SALICON model was trained on
original_shape = tensor.shape[:2]

# Add a batch dimension and resize to the inference shape
input_tensor = tf.expand_dims(tensor, axis=0)
input_tensor = tf.image.resize(input_tensor, inference_shape,
                               preserve_aspect_ratio=True)

# Run inference and resize the saliency map back to the original image size
saliency = imported(input_tensor)["output"]
saliency = tf.image.resize(saliency, original_shape)
```
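To see why the dtype argument caused the wrong output, here is a minimal sketch (using a tiny in-memory image, not the actual face.jpg from the thread) contrasting the two decode_image calls: without dtype the pixels stay as uint8 in [0, 255], while dtype=tf.float32 rescales them to [0, 1], which is not what the model expects.

```python
import tensorflow as tf

# Hypothetical 4x4 gray image encoded in memory, just to illustrate the effect
raw = tf.io.encode_jpeg(tf.zeros((4, 4, 3), dtype=tf.uint8) + 200)

# Default: uint8 values in [0, 255] -- the range the model expects
as_uint8 = tf.io.decode_image(raw, channels=3)
print(as_uint8.dtype, int(tf.reduce_max(as_uint8)))   # uint8, ~200

# With dtype=tf.float32, values are rescaled into [0, 1] -- source of the bug
as_float = tf.io.decode_image(raw, channels=3, dtype=tf.float32)
print(as_float.dtype, float(tf.reduce_max(as_float)))  # float32, ~0.78
```

Because the model was trained on [0, 255] inputs, feeding it the [0, 1] version effectively hands it a nearly black image, which explains the garbage saliency map.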
That worked, thanks a lot!
Hi,
I'm trying to test your pretrained SALICON model on my own images. I used the method from #15 to convert the TF1 model to TF2; however, the output doesn't look correct to me.
Here is my code:
Here is the input image
Here is the output image
Thanks!