pnggroup / libpng

LIBPNG: Portable Network Graphics support, official libpng repository
http://libpng.sf.net

16-bit channels, possible issue? #561

Closed: eightbitastronomy closed this issue 3 months ago

eightbitastronomy commented 3 months ago

Hello, I'm currently using version 1.6.40 and successfully producing png output with 8-bit channel depth in RGBA mode. However, when I alter my code to use 16-bit channel depth, the colors in the png are incorrect. I have found the problem (see below), but I'm unclear on whether this is (a) a coding error on my part (b) a misunderstanding of libpng on my part or (c) a 16-bit channel issue that somehow has never been noticed after all these years.

Here's what I have found:

Alternatively:

Lastly, calling identify in a shell on the output pngs confirms that they have the expected channel depth.

So, it would seem the issue is that even with 16-bit channels, only the upper 8 bits are being used. This would also mean that the alpha channel is in fact affected (in my case no problem showed up in the alpha simply because I have so far not used any meaningful levels of transparency).

My code is below. I have stripped it down to what's important so if there are any questions, please let me know. Any & all help is greatly appreciated!

pngptr = png_create_write_struct(PNG_LIBPNG_VER_STRING, errorptr, NULL, NULL);

infoptr = png_create_info_struct(pngptr);

setjmp(png_jmpbuf(pngptr));

png_init_io(pngptr, output);

/* only supporting on/off for compression at the moment */
if (opts->visuals.compression == 0) {
    png_set_compression_level(pngptr, Z_NO_COMPRESSION);
} else {
    png_set_compression_level(pngptr, Z_BEST_COMPRESSION); /* if compile err, try including zlib.h */
}

/* opts->visuals.depth can be either 8 or 16 at runtime */
png_set_IHDR(pngptr, infoptr, opts->nwidth, opts->nheight, opts->visuals.depth,
             PNG_COLOR_TYPE_RGB_ALPHA, PNG_INTERLACE_NONE,
             PNG_COMPRESSION_TYPE_DEFAULT, PNG_FILTER_TYPE_DEFAULT);

/* the following did not make a difference:
sig_bit.red = opts->visuals.depth;
sig_bit.green = opts->visuals.depth;
sig_bit.blue = opts->visuals.depth;
sig_bit.alpha = opts->visuals.depth;
png_set_sBIT(pngptr, infoptr, &sig_bit);
*/

/* next, text fields are set. Omitting because it's unimportant. */

/* Create an LCH color wheel. */

initialize_wheel(&colors, opts->visuals.colors->swatch_n, opts->visuals.colors->space,
                 opts->visuals.colors->mode, opts->visuals.colors->swatch);

/* setting up data container for png use. Only showing 16-bit-channel code here. */

rows = malloc(sizeof(png_byte *) * opts->nheight);
convertptr = convert_xyz_to_sRGB16;
datasize = sizeof(BaseC16);
swatchrgb = malloc(sizeof(datasize));
for (i = 0; i < opts->nheight; i++)
    rows[i] = malloc(sizeof(png_byte) * opts->nwidth * datasize);

/* transpose input data for libpng while converting from black & white to color */

for (i = 0; i < opts->nheight; i++) {
    for (j = 0; j < opts->nwidth; j++) {
        storevald = intensitymax == 0 ? 0.0 : (double)(canvas[j][i].n) / (double)(intensitymax);
        linear_by_intensity_norm(colors, storevald, &swatchI);
        convert_lch_to_lab(&swatchluv, (BaseI *)swatchI);
        convert_lab_to_xyz(&swatchxyz, &swatchluv);
        convertptr(swatchrgb, &swatchxyz, max_uint16);
        /* pretend the next call is to memcpy in string.h. i had my reasons for writing bytecopy */
        bytecopy(&(rows[opts->nheight-i-1][datasize*j]), swatchrgb, datasize);
        free((BaseI *)swatchI);
    }
}

png_write_info(pngptr, infoptr);
png_set_rows(pngptr, infoptr, rows);

/* Write/Output */

png_write_png(pngptr, infoptr, PNG_TRANSFORM_IDENTITY, NULL);
png_write_image(pngptr, rows);
png_write_end(pngptr, infoptr);
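One editorial aside on the listing above: the setjmp(png_jmpbuf(pngptr)) call near the top discards its return value, so a libpng error would longjmp back and fall straight through into the write sequence again. The documented libpng pattern branches on the return value; a minimal sketch, reusing the variable names from the listing (pngptr, infoptr, output):

if (setjmp(png_jmpbuf(pngptr))) {
    /* reached via longjmp when libpng reports an error */
    png_destroy_write_struct(&pngptr, &infoptr);
    fclose(output);
    return;   /* or whatever error convention the surrounding code uses */
}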

jbowler commented 3 months ago

Please attach the corresponding 8-bit-per-channel and 16-bit-per-channel images.

eightbitastronomy commented 3 months ago

The files are as follows (edited for the order in which the attachments appear, top to bottom):

Correct 8-bit image: mout8_8-to-8.png
Incorrect 8-bit image, with 16-bit calculation but 8-bit output (from the highest 8 bits): mout8_16-to-8.png
Incorrect 16-bit image, with 16-bit calculation and 16-bit output: mout16_16-to-16.png

[Attached images: mout8_8-to-8, mout8_16-to-upper-8, mout16_16-to-16]

jbowler commented 3 months ago

Then it's a bug in your code (which, incidentally, seems to have been trashed; read what you originally posted.)

Here is the first pixel of each file (just the RGB, I assume all the 'A' values are 255 or 65535), using hexadecimal to make it clear:

mout16_16-to-16 A325 0000 006B
mout8_8-to-8    0037 0000 006A
mout8_16-to-8   00A3 0000 0000

So libpng is writing the 16-bit data correctly in mout16_16-to-16, because when you write just the top 8 bits in mout8_16-to-8 you end up with a PNG which does, indeed, have the top byte.

I don't have the faintest idea where your code comes up with 0xA325 when it should be around 0x3737, or why 0x6A gets smashed to 0x006B when it should be around 0x6A6A. The handling of individual pixels isn't consistent either; the blue-to-red transition stops one pixel later in the correct image.

There's not enough of your code in the post to hazard a guess as to what is going on (for example, the declarations of swatchluv and swatchxyz are missing).

Your output files also have data after IEND and that data seems to be some other weird PNG file with small IDAT chunks. The chunks are of variable size and libpng won't do that with the simple IO call you made.

@ctruta: not a libpng issue, also a bug report with no repro.
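Regarding the data-after-IEND observation above, a minimal editorial sketch of how one might check a PNG for trailing bytes (not code from the thread; it assumes the file fits in a fixed buffer, and a tool such as pngcheck will report the same thing):

#include <stdio.h>
#include <string.h>

/* Editorial sketch: report how many bytes follow the IEND chunk of a PNG.
   Chunk layout: 4-byte big-endian length, 4-byte type, data, 4-byte CRC. */
int main(int argc, const char **argv) {
    static unsigned char buf[1 << 24];
    FILE *f;
    size_t n, i;

    if (argc < 2 || (f = fopen(argv[1], "rb")) == NULL)
        return 1;
    n = fread(buf, 1, sizeof buf, f);

    for (i = 8 /* skip the PNG signature */; i + 12 <= n; ) {
        size_t len = ((size_t)buf[i] << 24) | ((size_t)buf[i+1] << 16) |
                     ((size_t)buf[i+2] << 8) | buf[i+3];
        if (memcmp(buf + i + 4, "IEND", 4) == 0) {
            printf("%zu byte(s) after IEND\n", n - (i + 12 + len));
            return 0;
        }
        i += 12 + len;   /* advance past length, type, data, CRC */
    }
    printf("no IEND chunk found\n");
    return 1;
}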

eightbitastronomy commented 3 months ago

I did indeed see that my original code was trashed, but I figured there was no way to run it without all the definitions, etc., that I didn't include, especially since the code above is a shared object making calls to a different shared library. Some of the items are intermediary and have no bearing on the problem. Most of the trashing comes from the asterisks being interpreted as italics. In any case, sorry if that caused you any trouble.

As for the data in the bytes: I just double-checked the raw RGB data in 8 and 16 bits just before it's written to the png, and it checks out. It is what it should be, but it doesn't match the hex you've printed out.

Which brings me to the problem with the file after IEND... The only other call I make to libpng functions is png_set_text, right after I set up the png_text struct. This is the part I completely left out above, "omitting because it's unimportant".

Thanks for your time.

jbowler commented 3 months ago

As for the data in the bytes: I just double-checked the raw RGB data in 8 and 16 bits just before it's written to the png, and it checks out. It is what it should be, but it doesn't match the hex you've printed out.

Ok, here is what is in the three PNG files: these are the first 6 (16-bit file) or 3 (the 8-bit files) bytes of the first (top) row:

mout16_16-to-16 A325 0000 006B
mout8_8-to-8    0025 0000 006A     # FIRST BYTE CORRECTED
mout8_16-to-8   00A3 0000 0000

UPDATE: I corrected the first byte in mout8_8-to-8; I had entered the decimal value, not the hexadecimal one.

Please append the corresponding bytes from the rows variable in your own code for each case (preferably in hex):

mout16_16-to-16 rows[0][0] rows[0][1] rows[0][2] rows[0][3] rows[0][4] rows[0][5]
mout8_8-to-8    rows[0][0] rows[0][1] rows[0][2]
mout8_16-to-8   rows[0][0] rows[0][1] rows[0][2]

Use the "<>" markup (see the top of the edit tab - the line with "H B I ..." on it to stop github reformatting the data.

jbowler commented 3 months ago

Application bug; the 16-bit data is byte swapped (the first byte is the least significant, not the most). This code proves the point:

#include <stdio.h>
#include <string.h>
#include <zlib.h>
#include <png.h>

int main(int argc, const char **argv) {
    png_structp pr = png_create_read_struct(PNG_LIBPNG_VER_STRING, 0, 0, 0);
    png_structp pw = png_create_write_struct(PNG_LIBPNG_VER_STRING, 0, 0, 0);
    png_infop   pi = png_create_info_struct(pr);
    unsigned int trr = PNG_TRANSFORM_SWAP_ENDIAN;
    unsigned int trw = PNG_TRANSFORM_IDENTITY;

    if (argc > 1 && strcmp(argv[1], "-w") == 0) { /* swap on write */
        trr = PNG_TRANSFORM_IDENTITY;
        trw = PNG_TRANSFORM_SWAP_ENDIAN;
    }

    png_init_io(pr, stdin);
    png_init_io(pw, stdout);

    png_read_png(pr, pi, trr, 0);
    png_write_png(pw, pi, trw, 0);
    return 0;
}

Then:

$  fix16bit <mout16_16-to-16.png >mout16_16-to-16-swapped.png

This swaps the bytes on read and then uses png_write_png, as one of the code alternatives posted originally does. The result is this 16-bit PNG file:

[Attached image: mout16_16-to-16-swapped]

mout8-16-to-8.png contains the actual values from mout16-16-to-16.png divided by 257 and truncated (rounded towards 0, the C integer implementation of division). This is the correct conversion from 16 to 8 bits (multiply by 255 and divide by 65535), but since the values written into mout16-16-to-16.png were byte-swapped the results are confusing. Note that the division by 257 means the result isn't always the low-order byte of the 16-bit quantity.
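As a brief editorial illustration of that last point (the numbers here are chosen for the example, not taken from the files): 255/65535 is exactly 1/257, and the truncated quotient can differ from a byte extracted from the 16-bit value.

/* Editorial example: exact 16-to-8 scaling vs. taking a byte. */
unsigned v16      = 0x8000;      /* an arbitrary 16-bit sample     */
unsigned scaled   = v16 / 257;   /* 32768 / 257 = 127 (truncated)  */
unsigned top_byte = v16 >> 8;    /* 32768 >> 8  = 128              */
/* scaled and top_byte differ, so exact scaling is not the same as
   picking either byte of the 16-bit value. */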

The fix16bit code supports a -w option which does the swap on write instead. Most likely the app is running on a little-endian architecture.
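For completeness, an editorial sketch of how the application side might address this, reusing names from the original listing (r, c, and v here are hypothetical loop indices and a sample value, not identifiers from the thread):

/* Option 1: keep native little-endian values in rows and ask libpng to
   byte-swap 16-bit samples on output. */
png_write_png(pngptr, infoptr, PNG_TRANSFORM_SWAP_ENDIAN, NULL);

/* Option 2: store each 16-bit sample most-significant-byte first (PNG's
   network byte order) when filling rows, so no transform is needed. */
rows[r][2*c]     = (png_byte)(v >> 8);    /* high byte first */
rows[r][2*c + 1] = (png_byte)(v & 0xff);  /* then low byte   */

Note also that png_write_png already writes the info, the image data, and the IEND chunk, so the separate png_write_info, png_write_image, and png_write_end calls in the original listing are redundant at best.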

eightbitastronomy commented 3 months ago

I'm not exaggerating when I say you just made my day. I had given up on this problem with no intention of returning to it. Although at one point I had tried reordering my channels, thinking that I had misunderstood how the 16-bit process worked, I hadn't thought about the problem being a big-endian vs. little-endian issue.

That you continued to put time and effort into this problem... Thank you.