lvandeve / lodepng

PNG encoder and decoder in C and C++.

16bit grayscale pngs are corrupted #74

Open tharindu-mathew opened 6 years ago

tharindu-mathew commented 6 years ago

If you write 16-bit grayscale PNGs, the output is corrupted. 8-bit seems to work fine.

lvandeve commented 6 years ago

Thank you for reporting this

Can you please provide a bit more info to help me debug this?

I'd be interested in: how does it break, what kind of corruption do you get? Can you attach a corrupted image? How do you use lodepng: which function do you call with which parameters?

I tried creating a greyscale 16-bit PNG myself and it worked correctly with the following code:

#include <iostream>
#include <vector>
#include "lodepng.h"

int main(int argc, char* argv[]) {
  const char* filename = argc > 1 ? argv[1] : "test.png";
  unsigned width = 256, height = 256;
  std::vector<unsigned char> image;
  image.resize(width * height * 2); // 2 bytes per pixel
  for(unsigned y = 0; y < height; y++)
  for(unsigned x = 0; x < width; x++) {
    image[2 * width * y + 2 * x + 0] = y; // most significant
    image[2 * width * y + 2 * x + 1] = x; // least significant
  }
  unsigned error = lodepng::encode(filename, image, width, height, LCT_GREY, 16);
  // Note that the LCT_GREY and 16 parameters describe the std::vector we filled in;
  // lodepng will choose its output format itself based on the colors it gets. It will
  // choose 16-bit greyscale in this case, though, because of the pixel data we feed it.
  if(error != 0) std::cout << "error: " << error << std::endl;
}

Thanks!

hekkup commented 6 years ago

You might have a problem with the automatic bit depth adjustment. To disable it, find the function lodepng_encoder_settings_init() in lodepng.cpp and change the line settings->auto_convert = 1; to settings->auto_convert = 0; This worked for me.

The encoder functions should have a parameter or some other way to enable/disable the bit depth adjustment. You wouldn't expect any adjustment to happen when the functions have parameters to explicitly set color mode and bit depth -- the functions should behave according to those parameters, in my opinion.
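For reference, the same effect can be achieved without editing the library source, by flipping the flag on a lodepng::State at encode time. A minimal sketch (not from the thread; the helper name encode_grey16 is made up), assuming a 16-bit grey buffer laid out most significant byte first, as in the example above:

#include <vector>
#include "lodepng.h"

// Encode 16-bit grey without letting the encoder pick a different storage format.
// `image` is expected to hold width * height * 2 bytes.
unsigned encode_grey16(std::vector<unsigned char>& png,
                       const std::vector<unsigned char>& image,
                       unsigned width, unsigned height) {
  lodepng::State state;
  state.info_raw.colortype = LCT_GREY;        // format of the buffer we pass in
  state.info_raw.bitdepth = 16;
  state.info_png.color.colortype = LCT_GREY;  // format to store inside the PNG
  state.info_png.color.bitdepth = 16;
  state.encoder.auto_convert = 0;             // keep exactly this format
  return lodepng::encode(png, image, width, height, state);
}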

lvandeve commented 6 years ago

The parameters describe the color type of the uncompressed pixels which you give (to the encoder) or want to get (from the decoder); this is independent of how the image is stored inside the compressed PNG file.

There are two views of PNG color types:

  1. The PNG file should literally preserve the color format you gave it: you want it saved that way and loaded back that way, without conversions. Advantage: you preserve a piece of information, namely which color format you chose, in the image. Disadvantages: A) it may compress less well, e.g. you give it RGB data that is actually greyscale, or RGB data with fewer than 256 colors so it could in theory use a palette internally. B) if you load a PNG from an external source and want to use it as e.g. texture data, you get it in some arbitrary format (RGB, grey, palette, RGBA) depending on what happened to be in the PNG file, and you have to write code handling many different cases to convert it to a consistent format before you can render it.

  2. You completely decouple the color format you use for the uncompressed pixels from what is used for compressed storage inside the PNG file. You give the encoder data in your format and it encodes it in the optimal way inside the PNG. When decoding, you request the pixels in the color format you want, and no matter what color type was stored inside the PNG, the decoder converts it to your requested type.

LodePNG chooses behavior 2 by default because it is more convenient and compresses better. However, the auto_convert setting of the encoder and color_convert setting of the decoder allow you to get behavior 1, so that is fully supported as well.
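For the decoding side of behavior 1, a minimal sketch (my own illustration; the file name input.png is made up): with color_convert disabled the decoder returns the pixels in whatever format the PNG stores, and the state tells you afterwards which format that was.

#include <iostream>
#include <vector>
#include "lodepng.h"

int main() {
  std::vector<unsigned char> png, raw;
  unsigned width, height, error;

  error = lodepng::load_file(png, "input.png"); // made-up file name
  if(error) { std::cout << lodepng_error_text(error) << std::endl; return 1; }

  lodepng::State state;
  state.decoder.color_convert = 0; // give me the pixels exactly as stored in the PNG

  error = lodepng::decode(raw, width, height, state, png);
  if(error) { std::cout << lodepng_error_text(error) << std::endl; return 1; }

  // The PNG's own color format, e.g. LCT_GREY with bitdepth 16
  std::cout << "colortype: " << state.info_png.color.colortype
            << ", bitdepth: " << state.info_png.color.bitdepth << std::endl;
}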

I'm not sure whether or not this is what was meant by the original post; "corrupted" could also mean invalid/broken/unloadable.

xieofxie commented 5 years ago

image[2 * width * y + 2 * x + 0] = y; // most significant
image[2 * width * y + 2 * x + 1] = x; // least significant

I am wondering why [0] is used as the most significant part. For a little-endian machine, one can set uint16_t directly if [0] is treated as the least significant part.
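For what it's worth (my own note, not from the thread): PNG stores 16-bit samples big endian, most significant byte first, and lodepng's raw 16-bit buffer uses the same order, which is why index [0] holds the high byte. So on a little-endian machine you cannot simply reinterpret an array of uint16_t; a sketch of packing the samples portably (the helper name pack_grey16 is made up):

#include <cstddef>
#include <cstdint>
#include <vector>

// Pack 16-bit grey samples into the byte order lodepng expects for a 16-bit
// raw buffer: most significant byte first, regardless of host endianness.
std::vector<unsigned char> pack_grey16(const std::vector<uint16_t>& samples) {
  std::vector<unsigned char> bytes(samples.size() * 2);
  for(std::size_t i = 0; i < samples.size(); i++) {
    bytes[2 * i + 0] = samples[i] >> 8;   // most significant byte
    bytes[2 * i + 1] = samples[i] & 0xFF; // least significant byte
  }
  return bytes;
}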

DrSplinter commented 2 years ago

Hello, I have a problem with 16-bit grayscale images as well. The problem occurs when a pixel's upper and lower bytes are the same. For example, this works just fine:

std::vector<unsigned char> image = {253, 123};
std::vector<unsigned char> encoded, decoded;
unsigned int width, height;

ASSERT_EQ(0, lodepng::encode(encoded, image, 1, 1, LCT_GREY, 16));
ASSERT_EQ(0, lodepng::decode(decoded, width, height, encoded, LCT_GREY, 16)); 

ASSERT_EQ(image, decoded);

However, if the two bytes are the same, it's mysteriously encoded as 8-bit.

std::vector<unsigned char> image = {253, 253};
std::vector<unsigned char> encoded, decoded;
unsigned int width, height;

lodepng::encode(encoded, image, 1, 1, LCT_GREY, 16);

ASSERT_EQ(56, lodepng::decode(decoded, width, height, encoded, LCT_GREY, 16));
ASSERT_EQ(0, lodepng::decode(decoded, width, height, encoded, LCT_GREY, 8));

The only way I got it to work is by using lodepng::State; however, it took me a few hours to figure that out. I would prefer the code above to work as one would expect, without the conversion.

The working solution:

std::vector<unsigned char> image = {253, 253};
std::vector<unsigned char> encoded, decoded;
unsigned int width, height;
lodepng::State state;

state.info_raw.bitdepth = 16;
state.info_raw.colortype = LCT_GREY;
state.info_png.color.bitdepth = 16;
state.info_png.color.colortype = LCT_GREY;
state.encoder.auto_convert = 0;

ASSERT_EQ(0, lodepng::encode(encoded, image, 1, 1, state));
ASSERT_EQ(0, lodepng::decode(decoded, width, height, state, encoded)); 

ASSERT_EQ(image, decoded);