rdagger / micropython-ili9341

MicroPython ILI9341 Display & XPT2046 Touch Screen Driver
MIT License

RGB565 conversion #24

Closed codinglynpan closed 6 months ago

codinglynpan commented 7 months ago

In the Python script img2rgb565.py, I am puzzled why the RGB conversion is like this:

        #r = (pix[0] >> 3) & 0x1F
        #g = (pix[1] >> 2) & 0x3F
        #b = (pix[2] >> 3) & 0x1F

Is this a typo? I changed the code to this and the colors look much better on my CYD display.

        r = (pix[0] >> 3) & 0x1F
        g = (pix[1] >> 3) & 0x1F
        b = (pix[2] >> 3) & 0x1F
rdagger commented 7 months ago

It is not a typo. Your method reduces the color depth by an additional bit, from 16 bits to 15.

The RGB565 format gives the green channel an extra bit compared to the red and blue channels because the human eye is more sensitive to variations in green than it is to variations in red and blue. This additional bit for the green channel allows for a finer gradation of green shades, which results in a better overall image quality, given the limited 16-bit color depth.
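For reference, here is a minimal sketch of how those three values end up in a single 16-bit word. The bit layout (red in the high bits, green in the middle, blue in the low bits) is the usual RGB565 convention, and the helper name is just for illustration:

    def rgb888_to_rgb565(pix):
        # Drop the low bits of each 8-bit channel: 5 bits red, 6 green, 5 blue.
        r = (pix[0] >> 3) & 0x1F
        g = (pix[1] >> 2) & 0x3F   # green keeps one extra bit
        b = (pix[2] >> 3) & 0x1F
        # Pack as RRRRRGGG GGGBBBBB.
        return (r << 11) | (g << 5) | b

    print(hex(rgb888_to_rgb565((255, 128, 0))))  # 0xfc00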

However, I see how your approach could provide a more subjectively pleasing color representation by decreasing the color depth in a more uniform manner.
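As an illustration of that trade-off, here is a hedged sketch of the uniform 5-5-5 quantization written into the same 565 layout. This is not the project's utility, and it assumes the 5-bit green value is padded back into the 6-bit field:

    def rgb888_to_rgb555_in_565(pix):
        r = (pix[0] >> 3) & 0x1F
        g = (pix[1] >> 3) & 0x1F          # 5 bits of green instead of 6
        b = (pix[2] >> 3) & 0x1F
        # Green's lowest bit is always 0, so only 2**15 = 32768 colors remain.
        return (r << 11) | (g << 6) | b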

My utility is a very simple, entry-level conversion from RGB888 to RGB565. There can be color distortion any time you reduce the color depth.

There are more advanced approaches:

  1. Adaptive Color Palettes: Advanced methods could involve generating custom color palettes for each image, based on color frequency analysis or clustering (e.g., using k-means on color data), to optimize the color representation (see the sketch after this list).
  2. Perceptual Quantization: Techniques that take human vision characteristics into account, such as perceptual uniformity and color vision deficiencies, to ensure the quantized image looks as close as possible to the original when viewed by the human eye.
  3. Hybrid Methods: Combining multiple techniques, such as adaptive dithering, where the dithering algorithm changes based on local image characteristics, or incorporating machine learning models to predict optimal quantization and dithering strategies for each image.
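As a rough illustration of the first approach, here is a hedged sketch using Pillow's built-in adaptive quantizer before converting to RGB565. The function name, file handling, and the 256-color choice are assumptions, not part of img2rgb565.py:

    from PIL import Image

    def adaptive_rgb565(path, colors=256):
        img = Image.open(path).convert('RGB')
        # Build an image-specific palette (median cut; a kmeans argument is
        # also available for refinement), then expand back to RGB for packing.
        reduced = img.quantize(colors=colors).convert('RGB')
        words = []
        for pix in reduced.getdata():
            r = (pix[0] >> 3) & 0x1F
            g = (pix[1] >> 2) & 0x3F
            b = (pix[2] >> 3) & 0x1F
            words.append((r << 11) | (g << 5) | b)
        return words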

These advanced approaches are outside the scope of this project. However, if I have the time, I could probably use NumPy to incorporate some gamma correction and add a switch for uniform conversions.
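One way that might look (the parameter names, the default gamma value, and the uniform switch are assumptions, not existing options of img2rgb565.py):

    import numpy as np

    def convert(pixels, gamma=2.2, uniform=False):
        # pixels: (N, 3) uint8 array of RGB888 values -> (N,) uint16 RGB565 words.
        # Gamma adjustment in floating point, then back to the 8-bit range.
        adjusted = (np.power(pixels / 255.0, 1.0 / gamma) * 255.0).astype(np.uint16)
        r = (adjusted[:, 0] >> 3) & 0x1F
        b = (adjusted[:, 2] >> 3) & 0x1F
        if uniform:
            g = ((adjusted[:, 1] >> 3) & 0x1F) << 1   # 5-bit green padded into the 6-bit field
        else:
            g = (adjusted[:, 1] >> 2) & 0x3F          # standard 6-bit green
        return (r << 11) | (g << 5) | b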