Open hariharans29 opened 5 years ago
Oh, thank you so much! This is really important, since onnxruntime will crash if there are no detections. There is nothing we can do from the outside to prevent this: the problem is caused by the model itself, and it results in a segmentation fault inside onnxruntime.
While working with this model I ran into this exact problem. My workaround is to resize the input image to fit 1088x800 and pad the input array with non-zero values.
Here is my preprocessing code:

```python
import numpy as np
from PIL import Image

# img is a PIL Image instance
scale = min(800 / img.height, 1088 / img.width)
w = round(img.width * scale)
h = round(img.height * scale)
img = img.resize((w, h), Image.BILINEAR)

# Convert RGB -> BGR
image = np.array(img)[:, :, [2, 1, 0]].astype(np.float32)

# HWC -> CHW
image = np.transpose(image, [2, 0, 1])

# Normalize: subtract the per-channel mean (BGR order)
mean_vec = np.array([102.9801, 115.9465, 122.7717])
for i in range(image.shape[0]):
    image[i, :, :] = image[i, :, :] - mean_vec[i]

# Padding with -mean_vec values gives the same effect as filling with black
p = [np.full((800, 1088), v) for v in -mean_vec]
padded_image = np.array(p, dtype=np.float32)
padded_image[:, :image.shape[1], :image.shape[2]] = image
```
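For convenience, the steps above can be wrapped into a self-contained function. This is just a sketch: the `preprocess` name and the synthetic test image are mine, while the 1088x800 target size and the mean values come from the snippet above.

```python
import numpy as np
from PIL import Image

def preprocess(img, target_h=800, target_w=1088):
    """Resize img to fit target_w x target_h, convert to BGR CHW float32,
    subtract the per-channel mean, and pad to the full target size."""
    scale = min(target_h / img.height, target_w / img.width)
    w = round(img.width * scale)
    h = round(img.height * scale)
    img = img.resize((w, h), Image.BILINEAR)

    # RGB -> BGR, then HWC -> CHW
    image = np.array(img)[:, :, [2, 1, 0]].astype(np.float32)
    image = np.transpose(image, [2, 0, 1])

    # Subtract the per-channel mean (BGR order)
    mean_vec = np.array([102.9801, 115.9465, 122.7717], dtype=np.float32)
    image -= mean_vec[:, None, None]

    # Pad with -mean_vec so the padding behaves like black pixels
    padded = np.tile((-mean_vec)[:, None, None],
                     (1, target_h, target_w)).astype(np.float32)
    padded[:, :h, :w] = image
    return padded

# Usage with a synthetic black 600x400 image:
out = preprocess(Image.new("RGB", (600, 400)))
print(out.shape)  # (3, 800, 1088)
```

The returned array has the fixed shape the model expects, regardless of the input image's aspect ratio.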
@BowenBao: Based on the discussion in https://github.com/microsoft/onnxruntime/issues/1670, I am assigning this to you. Please update the Mask RCNN model, or update the documentation to state that this model needs real image input.
Takeaway from https://github.com/microsoft/onnxruntime/issues/1670
CC: @BowenBao