If I divide an image into a small number of superpixels/segments (say 3 or 4), the usual number of samples used to generate the explanation (say 200–1000) will mean that the same combination of perturbed segments occurs more than once in the perturbed distribution.
Say two of my perturbed images have superpixels 1 and 3 turned off and the rest turned on.
Are these repeats a problem? Do they impact the explanation weights for each segment?
They do affect the explanation weights, but repeats aren't a problem. A duplicated perturbation simply counts its configuration twice in the weighted linear model that is fit to produce the explanation, which is equivalent to giving that single sample double the weight. Their effect is much like having two nearly identical perturbed samples with the same label.
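A minimal sketch of that equivalence, assuming a LIME-style setup: binary masks over 3 superpixels as features, a toy stand-in for the black-box model's output, and a weighted least-squares fit for the explanation. None of the names below come from the actual LIME library; they're illustrative.

```python
import numpy as np

# Hypothetical toy setup: 3 superpixels, each perturbed sample is a
# binary on/off mask z; y is a stand-in for the black-box model's
# output on the corresponding perturbed image.
rng = np.random.default_rng(0)
masks = np.array([[1, 1, 1], [0, 1, 1], [1, 0, 1],
                  [1, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
y = masks @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.01, len(masks))
X = np.hstack([np.ones((len(masks), 1)), masks])  # intercept + mask features

def wls(X, y, w):
    # Weighted least squares: solve (X^T W X) beta = X^T W y
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Fit with the first perturbation duplicated (a "repeat" in the sample set).
X_dup = np.vstack([X, X[0]])
y_dup = np.append(y, y[0])
beta_dup = wls(X_dup, y_dup, np.ones(len(X_dup)))

# Fit with unique samples but double weight on that same perturbation.
w2 = np.ones(len(masks))
w2[0] = 2.0
beta_reweighted = wls(X, y, w2)

# The two fits produce the same coefficients: a repeat acts exactly
# like extra sample weight on that configuration.
print(np.allclose(beta_dup, beta_reweighted))
```

So repeats just emphasize the configurations they represent; with few segments that emphasis is spread fairly evenly across the handful of possible masks.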