cmfcmf opened this issue 9 years ago
Is there an appropriate way to calculate the quality relative to the image size?
I don't know the algorithm, or what the 'quality' setting means. Should I be picking something like 1% of the image area?
In fact, the library loads the image's pixels line by line, and the quality setting simply defines the number of pixels to skip. With a quality of 8, for example, Color Thief will only load and analyze one pixel in eight on each line (pixels at coordinates (0,0), (0,8), (0,16), ..., (1,0), (1,8), (1,16), ...). See here for details.
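To make that concrete, here is a minimal sketch (not the library's actual code, and assuming an Imagick source image) of row-based sampling with a skip-style quality factor:

```php
<?php
// Sketch only: read one pixel in every $quality on each row, as described above.
function samplePixelsByRow(Imagick $image, int $quality): array
{
    $width  = $image->getImageWidth();
    $height = $image->getImageHeight();
    $pixels = [];

    for ($y = 0; $y < $height; $y++) {
        // Step along the row: x = 0, $quality, 2 * $quality, ...
        for ($x = 0; $x < $width; $x += $quality) {
            $rgb = $image->getImagePixelColor($x, $y)->getColor();
            $pixels[] = [$rgb['r'], $rgb['g'], $rgb['b']];
        }
    }

    return $pixels;
}
```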
My main problem with using the relative quality as it is right now is that it is bad on tiny images and consumes a huge amount of memory and time on big images.
If I use a fixed relative quality like 100 on a tiny image of, say, 100x100, the result will be pretty bad, whereas on a big image of, say, 3000x3000, the result will be pretty good but it will take a lot of memory and computation time.
If I could use an absolute value like 10000, it would analyze every pixel of the tiny image. For the big image, it would calculate 3000*3000/10000 = 900 and therefore analyze every 900th pixel: the result would still be pretty good, and the memory usage would be okay as well.
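A sketch of how such an absolute budget could be translated into a skip step (the names here are hypothetical, not part of the library):

```php
<?php
// Hypothetical helper: convert an absolute sample budget into a skip step.
// A 100x100 image with a budget of 10000 gives a step of 1 (every pixel);
// a 3000x3000 image gives 900 (every 900th pixel), as described above.
function stepForSampleBudget(int $width, int $height, int $targetSamples): int
{
    return max(1, (int) floor(($width * $height) / $targetSamples));
}
```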
I agree that it would be better if this argument were absolute. I am also not convinced that this method of pixel sampling is ideal for obtaining a representative sample. Instead of skipping pixels only along the width, I think it would be better to take one pixel from each square area (see the sketch below).
But until now I have always tried to keep the same implementation as the original JavaScript version, in order to produce the same results. By changing the behavior of the quality factor, the implementation and the resulting dominant colors will become different.
That said, the JavaScript version seems less and less maintained. So I hesitate between keeping the same implementation, defects included, and breaking compatibility in order to make deeper improvements.
What are your opinions on this dilemma?
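For reference, a rough sketch of the square-area sampling idea mentioned above (purely illustrative, not how Color Thief currently works):

```php
<?php
// Illustrative only: take one pixel from each $block x $block square, so the
// samples are spread over the whole image area instead of only along each row.
// $image is assumed to be an Imagick instance.
function samplePixelsByBlock(Imagick $image, int $block): array
{
    $width  = $image->getImageWidth();
    $height = $image->getImageHeight();
    $pixels = [];

    for ($y = 0; $y < $height; $y += $block) {
        for ($x = 0; $x < $width; $x += $block) {
            // Top-left pixel of the block; a random pixel inside the block
            // would work just as well.
            $rgb = $image->getImagePixelColor($x, $y)->getColor();
            $pixels[] = [$rgb['r'], $rgb['g'], $rgb['b']];
        }
    }

    return $pixels;
}
```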
I don't know, it's up to you. I would go for changing and improving your library. :smiley:
I am trying to calculate a relative quality based on image size:
```php
public static function color_thief_quality($image)
{
    // Scale the quality with the square root of the image's pixel count.
    return (int) round(sqrt((float) ($image->getImageHeight() * $image->getImageWidth()) * 0.02));
}
```
I'm trying to set the quality to 2% of the width of a square image, but I'm not sure whether this is an obvious or helpful algorithm.
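For context, this is roughly how such a helper might be used (assuming it lives in a Helper class, an Imagick source image, and that ColorThief::getColor() accepts the quality as its second argument):

```php
<?php
use ColorThief\ColorThief;

$image   = new Imagick('photo.jpg');
$quality = Helper::color_thief_quality($image); // ~14 for a 100x100 image
$color   = ColorThief::getColor($image, $quality);
```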
What are the implications of resizing the image prior to passing it to Color Thief? Does that sound like a simpler way to manage the memory usage of histogram acquisition?
The results will be different, because the image will lose some color detail (see #9). And I think the computation time to resize the image may be longer than simply skipping some pixels.
I'm currently also analyzing the image size first and calculating the $quality parameter based on that:
$quality = ceil(($width * $height) / 10000)
But it would be handy if the library did this for us, because I can't really think of a use case for the current relative $quality.
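Until the library does, a small wrapper along these lines could work (a sketch, assuming an Imagick source image and the ColorThief::getPalette($image, $colorCount, $quality) signature):

```php
<?php
use ColorThief\ColorThief;

// Sketch: derive the quality from the pixel count so that roughly 10000 pixels
// are analyzed regardless of the image size, then fetch the palette.
function paletteWithAdaptiveQuality(Imagick $image, int $colorCount = 5): array
{
    $quality = (int) ceil(($image->getImageWidth() * $image->getImageHeight()) / 10000);

    return ColorThief::getPalette($image, $colorCount, max(1, $quality));
}
```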
It would be great if the $quality could also be absolute. I'd like to set it to a static value like 10000, meaning it would take the same time and memory regardless of the picture size.