plumbojumbo closed this issue 8 years ago
Is there any experience regarding dHash accuracy on 8 x 8 images vs. 9 x 8/9?
No. I didn't do extensive research on this topic and 9x9 was used in the reference algorithm.
I'll try to run both proposed solutions against my data set to see if there's a difference in quality, but my gut feeling tells me that 2 is probably best, because it doesn't change the algorithm.
Anyway, thanks for contributing!
The implementation of `+[NSBitmapImageRep(CocoaImageHashing) imageRepFrom:scaledToWidth:scaledToHeight:usingInterpolation:]` calls `-[NSBitmapImageRep initWithBitmapDataPlanes:pixelsWide:pixelsHigh:bitsPerSample:samplesPerPixel:hasAlpha:isPlanar:colorSpaceName:bytesPerRow:bitsPerPixel:]`, passing 0 to `bytesPerRow`. Per `NSBitmapImageRep`'s documentation, a `bytesPerRow` of 0 allows the bitmap data to be padded so that each scan line falls on an alignment boundary chosen for performance. The 36-byte scan line of a 9 x 9 image does not fall on these boundaries, so it is zero-padded to apparently 64 bytes. The dHash implementation, however, assumes the data to be contiguous. As a consequence, out of 324 bytes or 81 pixels of image data, only the first 176 bytes or 44 pixels are currently used to calculate the hash.
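As a minimal, standalone sketch (not part of the project) of the padding behaviour described above, the following snippet creates a 9 x 9 RGBA rep with `bytesPerRow:0` and prints the payload bytes per row against the stride AppKit actually allocates:

```objc
#import <AppKit/AppKit.h>

int main(void)
{
    @autoreleasepool {
        NSBitmapImageRep *rep =
            [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                     pixelsWide:9
                                                     pixelsHigh:9
                                                  bitsPerSample:8
                                                samplesPerPixel:4
                                                       hasAlpha:YES
                                                       isPlanar:NO
                                                 colorSpaceName:NSCalibratedRGBColorSpace
                                                    bytesPerRow:0   // 0 = let AppKit choose (and pad) the stride
                                                   bitsPerPixel:0];
        // Payload is 9 pixels * 4 samples = 36 bytes per row; the allocated
        // stride is typically rounded up (e.g. to 64 bytes) for alignment,
        // which is exactly the gap a contiguous reader walks into.
        NSLog(@"payload per row: %ld bytes, bytesPerRow: %ld bytes",
              (long)(rep.pixelsWide * rep.samplesPerPixel), (long)rep.bytesPerRow);
    }
    return 0;
}
```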
Possible fixes:
1. Scale images to 8 x 8 instead of 9 x 9 (which changes the algorithm).
2. Pass an explicit `bytesPerRow`, i.e. `width * 4` (same approach as in the iOS version, but possibly also aligning image data less optimally).
3. Set `bytesPerRow` to 64 bytes and skip the padding in the hash calculation.

Implementations for 2 and 3 are here and here; a rough sketch of both approaches is shown below.
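For illustration only (these are not the linked implementations, and the helper names are hypothetical), fixes 2 and 3 could look roughly like this, assuming an RGBA rep like the one created above:

```objc
#import <AppKit/AppKit.h>

// Fix 2: pass an explicit, unpadded stride so the pixel data stays contiguous.
static NSBitmapImageRep *ContiguousRep(NSInteger width, NSInteger height)
{
    return [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                    pixelsWide:width
                                                    pixelsHigh:height
                                                 bitsPerSample:8
                                               samplesPerPixel:4
                                                      hasAlpha:YES
                                                      isPlanar:NO
                                                colorSpaceName:NSCalibratedRGBColorSpace
                                                   bytesPerRow:width * 4  // explicit stride, no padding
                                                  bitsPerPixel:32];
}

// Fix 3: keep bytesPerRow:0 but honour the rep's actual stride when reading
// pixels, so any padding at the end of each scan line is skipped.
static unsigned char GrayAt(NSBitmapImageRep *rep, NSInteger x, NSInteger y)
{
    const unsigned char *row = rep.bitmapData + y * rep.bytesPerRow; // stride may be padded
    const unsigned char *p = row + x * (rep.bitsPerPixel / 8);
    return (unsigned char)((p[0] + p[1] + p[2]) / 3); // naive grayscale for illustration
}
```

With fix 2 the hash code can keep treating the buffer as one contiguous run of `width * 4` bytes per row; with fix 3 the buffer stays padded, and the hash loop has to index rows via `bytesPerRow` instead of assuming contiguous data.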