I see that the code contains a function to bias patch selection toward high image gradients:
```python
g = self.__image_gradient(images)
# oversample: draw 3x the needed number of candidate centers, away from the border
x = torch.randint(1, w-1, size=[n, 3*patches_per_image], device="cuda")
y = torch.randint(1, h-1, size=[n, 3*patches_per_image], device="cuda")
coords = torch.stack([x, y], dim=-1).float()
# score each candidate by the gradient magnitude at its center
g = altcorr.patchify(g[0,:,None], coords, 0).view(n, 3 * patches_per_image)
# keep the patches_per_image candidates with the highest scores
ix = torch.argsort(g, dim=1)
x = torch.gather(x, 1, ix[:, -patches_per_image:])
y = torch.gather(y, 1, ix[:, -patches_per_image:])
```
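For reference, the oversample-then-keep-top-k logic above can be sketched in plain NumPy (no CUDA or `altcorr` required; the finite-difference gradient and the function name `select_patches` here are my own stand-ins, not the repo's API):

```python
import numpy as np

def select_patches(image, k, seed=0):
    """Sample 3*k candidate centers, score each by local gradient
    magnitude, and keep the k highest-scoring ones (mirrors the
    argsort/gather pattern in the quoted code)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape

    # simple gradient-magnitude map, standing in for __image_gradient
    gy, gx = np.gradient(image.astype(np.float64))
    g = np.sqrt(gx**2 + gy**2)

    # oversample 3*k candidate coordinates away from the border
    x = rng.integers(1, w - 1, size=3 * k)
    y = rng.integers(1, h - 1, size=3 * k)

    # score candidates at their centers and keep the top k
    scores = g[y, x]
    ix = np.argsort(scores)[-k:]
    return x[ix], y[ix]
```

With a step-edge image, the selected centers concentrate on the high-gradient columns around the edge, which is exactly the bias the training apparently dislikes.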
Activating this part makes training unstable and generally worse.
Do you have an explanation for why the image-gradient bias doesn't work? I couldn't find any decisive conclusion in the paper.