Closed garykuok closed 3 years ago
This is a follow-up question to #453
Solved by expanding a dimension on the third axis. A new problem arises: the plotted image appears inverted. The background is colorised green, and the object itself is white. Here's my predict function:

```python
import numpy as np
from skimage.color import rgb2gray

def predict_fn(x):
    # LIME passes RGB images of shape (size, x, y, 3); convert back to
    # grayscale (size, x, y), then restore the channel axis the model expects
    x = rgb2gray(x)
    return model.predict(np.expand_dims(x, axis=3))
```
The calculated probabilities are the same for all samples. When calling get_image_and_mask with positive_only=True and hide_rest=True, I also get the white object. Does this mean my implementation is correct? The original image is just a black object on a white background.
Solved the last problem by tuning the kernel_size of the segmentation algorithm.
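For anyone hitting the same issue: quickshift (from scikit-image) is the segmenter LIME uses by default, and its kernel_size controls how coarse the superpixels are. A quick way to see the effect by calling quickshift directly, as a sketch; the values that work for your images will differ:

```python
import numpy as np
from skimage.segmentation import quickshift

# Toy RGB image: a dark square "object" on a light background
img = np.ones((48, 48, 3))
img[12:36, 12:36] = 0.1

# A larger kernel_size generally yields fewer, larger superpixels, so the
# object is less likely to be shattered across many tiny segments
fine = quickshift(img, kernel_size=2, max_dist=10, ratio=0.5)
coarse = quickshift(img, kernel_size=6, max_dist=10, ratio=0.5)
print(len(np.unique(fine)), len(np.unique(coarse)))
```

Each result is a (48, 48) label map; comparing the segment counts for the two kernel sizes shows how the knob changes the segmentation granularity.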
Hello, I am also using grayscale images with lime's explain_instance function. I'm passing the image with shape (x, y). Now I've hit a new problem: if I set num_samples > 1, the batch passed to classifier_fn has shape (size, x, y, 3). In this case I can't simply reshape it back to a grayscale image of shape (x, y, 1) because of the 'size' dimension. If I set num_samples=1, I can expand it to (1, x, y, 1) and predict as usual (this is the shape my classifier accepts). How does num_samples work in the classifier? Or should I just omit the 'size' and always return the same result in a loop?
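As far as I understand, classifier_fn is expected to take the whole batch of perturbed images at once and return one probability row per sample, so there is no need to force num_samples=1 or to loop. You can convert the entire (size, x, y, 3) batch in one shot and keep the batch axis. A minimal NumPy-only sketch, where DummyModel is a hypothetical stand-in for the real grayscale classifier:

```python
import numpy as np

def rgb2gray(batch):
    # Luminance weights (the same ones skimage uses); the matmul applies to
    # any (..., 3) array, so a whole (size, x, y, 3) batch converts at once
    return batch @ np.array([0.2125, 0.7154, 0.0721])

class DummyModel:
    # Hypothetical stand-in for the real classifier: expects (size, x, y, 1)
    # input and returns (size, 2) class probabilities
    def predict(self, x):
        p = x.mean(axis=(1, 2, 3))
        return np.stack([p, 1.0 - p], axis=1)

model = DummyModel()

def predict_fn(batch):
    # LIME hands classifier_fn the whole perturbed batch: (size, x, y, 3).
    # Collapse the RGB axis, then restore a singleton channel axis.
    gray = rgb2gray(batch)                # (size, x, y)
    gray = np.expand_dims(gray, axis=-1)  # (size, x, y, 1)
    return model.predict(gray)            # (size, num_classes)

batch = np.random.rand(100, 32, 32, 3)    # any num_samples works
print(predict_fn(batch).shape)            # -> (100, 2)
```

The key point is that expand_dims on the last axis works for any batch size, so the same predict_fn serves num_samples=1 and num_samples=1000 alike.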