DonaldTsang opened this issue 6 years ago
Hi @DonaldTsang, from my experience with this library:
Since the underlying algorithm works with luminance averages over regions, you can visualize it as looking at two images while squinting. "The same" image here means spatially similar, not "featuring the same contents".
Have a nice day
@Lucassifoni let's see how this could be fixed
I feel that the rotation/color inversion cases could be handled outside this library.
Conceptually, an inverted image's hash is the complement of the original's hash, because of the luminance-sampling technique. So you could test for inversion with `count_bits(~hash1 ^ hash2) < n`, where `n` is your target distance.
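A minimal sketch of that inversion test in Python, assuming 64-bit integer hashes; `count_bits`, `looks_inverted`, and the threshold `n` are illustrative names, not part of this library's API:

```python
def count_bits(x: int) -> int:
    # Population count: number of set bits in x.
    return bin(x).count("1")

def looks_inverted(hash1: int, hash2: int, n: int = 10, bits: int = 64) -> bool:
    # If image B is the luminance inversion of image A, then hash2 should be
    # (close to) the bitwise complement of hash1, i.e. ~hash1 and hash2
    # should differ in fewer than n bits.
    mask = (1 << bits) - 1  # keep Python's unbounded ~ within `bits` bits
    return count_bits(((~hash1) & mask) ^ hash2) < n

original = 0xABCD1234EF015678
inverted = (~original) & ((1 << 64) - 1)  # exact complement: distance 0
assert looks_inverted(original, inverted)
```

The mask is needed because Python integers are unbounded, so `~hash1` alone would produce a negative number rather than a 64-bit complement.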
Rotation could be handled the same way, by rotating the hash itself, since the hash translates to a spatial representation:
```
abcd1234efgh5678

-> 2d

a b c d
1 2 3 4
e f g h
5 6 7 8

-> rotate 90° clockwise

5 e 1 a
6 f 2 b
7 g 3 c
8 h 4 d

-> 1d

5e1a6f2b7g3c8h4d
```
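The steps above can be sketched as follows. Note the example string is illustrative rather than valid hex, and since each hex character encodes 4 bits, this character-level rotation only matches a true rotation of the underlying bit grid if the hash's spatial cells align with nibble boundaries:

```python
def rotate_hash_90cw(h: str, side: int = 4) -> str:
    # Lay the hash out row-major as a side x side character grid,
    # then build each output row by reading an input column from the
    # bottom up: a 90-degree clockwise rotation.
    grid = [h[i * side:(i + 1) * side] for i in range(side)]
    return "".join(
        grid[side - 1 - r][c]
        for c in range(side)   # output rows = input columns
        for r in range(side)   # read each column bottom-to-top
    )

print(rotate_hash_90cw("abcd1234efgh5678"))  # -> 5e1a6f2b7g3c8h4d
```

Applying it four times returns the original string, so the other rotations fall out for free.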
But this feels beyond the scope of this library, since the question it tries to answer is "are these two images roughly the same?", and an inverted or rotated image is visually not the same as the original.
I would like to ask if imagehash can handle these types of edited images.
Goal: Update on https://github.com/pippy360/transformationInvariantImageSearch
Comparison: https://github.com/kennethrapp/phasher