VinCal opened this issue 5 years ago
ImageMagick uses the C API of the exr library, which only supports reading ImfHalf pixels, where we get an unsigned short value per channel. This means the image is read at 16 bits per channel, even though you get a 32-bit channel value in the HDRI build.
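The effect of that half-float path can be illustrated outside of ImageMagick. The sketch below (plain Python, with a hypothetical channel value) round-trips a 32-bit value through IEEE 754 half precision, which is the storage format behind ImfHalf:

```python
import struct

# Hypothetical 32-bit channel value from an EXR file.
full = 2.5309

# Round-trip it through IEEE 754 half precision (the 'e' struct format),
# the storage type behind ImfHalf: only ~11 bits of mantissa survive.
half = struct.unpack('<e', struct.pack('<e', full))[0]

print(full, half)  # the half copy is a nearby, less precise value
```

So even in the HDRI build, where a channel is a 32-bit float, data read through the half-float path can only ever carry half precision.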
As for your other issue, it would be nice if you could share an example image so I can try to reproduce it.
Thanks for the quick reply! I do have a follow-up question regarding the Depth property. I have attached the example images for this test too: bitdepth_test.zip.
//var float16 = new MagickImage(@"E:\magick\exr\bitdepth_test\16.tif");
//var float16 = new MagickImage(@"E:\magick\exr\bitdepth_test\32.hdr");
var float16 = new MagickImage(@"E:\magick\exr\bitdepth_test\32.exr");
float16.Depth.Dump(); // 16
DumpPixelInfo(float16, 20, 20);
var tiffWriteDefines = new TiffWriteDefines { Alpha = TiffAlpha.Unspecified };
float16.Write(@"E:\magick\exr\bitdepth_test\out_16.tif", tiffWriteDefines);
var int16 = new MagickImage(@"E:\magick\exr\bitdepth_test\out_16.tif");
DumpPixelInfo(int16, 20, 20);
void DumpPixelInfo(IMagickImage image, int x, int y)
{
    IPixelCollection pixels = image.GetPixels();
    Pixel pixel = pixels.GetPixel(x, y);
    float R = pixel.GetChannel(0);
    float G = pixel.GetChannel(1);
    float B = pixel.GetChannel(2);
    $"R: {R} G: {G} B: {B}".Dump(); // print the channel values (LINQPad)
}
The output is the following, for all 3 different input images:
16.tif: R: 165885.5 G: 68478.95 B: 18367.72
32.hdr: R: 164861.5 G: 67582.97 B: 17407.73
32.exr: R: 165885.5 G: 68478.95 B: 18367.72
All 3 of those give me a value of 16 when I query the Depth property; for the half-float .tif this makes sense, of course. The .hdr I assume is for the same reason as the .exr? The problem is that when I save this image as a tif, it is converted to a 16-bit integer image, which clamps all values > 1. So I end up with a tif with the following values: R: 65535 G: 65535 B: 18368 (slightly different values for the .hdr one; I assume its range is different).
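The clamping described above can be sketched as a plain scale-and-clamp (an illustration of the idea, assuming a Q16-style quantum range of 0-65535, not Magick.NET's actual code):

```python
def to_uint16(value):
    # Scale a normalized channel value into the 16-bit integer range
    # and saturate: anything above 1.0 collapses to 65535.
    return min(max(int(round(value * 65535)), 0), 65535)

print(to_uint16(0.5))   # in range, survives the conversion
print(to_uint16(2.53))  # HDR highlight, clipped to 65535
```

Once a float channel value has been squeezed through such a conversion, everything above white is indistinguishable from 65535.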
I don't mind that some of these formats (hdr, exr) are converted to half floats, and I also wouldn't mind them being saved as 32-bit float tifs. But how do I determine whether an image has HDR values, so I can set the depth to 32 and have it saved as a 32-bit float image? Or, even better, how would I know it was a 32-bit image to begin with, so I can save it as 32-bit regardless of whether it contains HDR values? Or is there a better way altogether? :)
A 32-bit tif has a correct depth of 32, so it gets saved correctly too.
For the other issue I have attached color_type.zip, which contains the output of this snippet:
var image = new MagickImage(@"E:\magick\exr\bug.exr");
var tiffWriteDefines = new TiffWriteDefines { Alpha = TiffAlpha.Unspecified };
image.Write(@"E:\magick\exr\out_bug.tif", tiffWriteDefines);
if (image.ColorType == ColorType.TrueColorAlpha)
{
    // Re-assigning the same ColorType should be a no-op,
    // but it changes the pixel values in the written tif.
    image.ColorType = ColorType.TrueColorAlpha;
}
image.Write(@"E:\magick\exr\out_bug_color_type.tif", tiffWriteDefines);
I have a requirement to process only 32-bit CMYK tiff images, so I am using a condition on ColorSpace and BitDepth(). But for 32-bit CMYK tiff images, Magick.NET returns 16 from BitDepth().
Even hovering the mouse over the MagickImage variable shows 8-bit CMYK, and Photoshop correctly shows CMYK Color and 8 Bits/Channel under Image -> Mode.
Why this difference? How can I process only 32-bit CMYK images?
Magick.NET version: 7.16.1
What happens when you use the Depth property instead, @manjunthcm? I am also not sure how your question is related to this issue; next time it might be better to start a new discussion instead. And are you using the Q8 version of Magick.NET?
Hi @dlemstra, sorry, I used this discussion since the subject mentions an incorrect bit depth. Apologies if this is a source of confusion.
Yes, you are right: Depth gives 8, while BitDepth gives 16.
I am using Magick.NET-Q16-AnyCPU version 7.16.1.
So based on the above observation, should we use Depth instead of BitDepth()?
That is the correct assumption.
I feel there is an issue in the BitDepth() function; it is not consistent in its return values. I have observed that it sometimes returns 8 and sometimes 16 for an 8-bit CMYK tiff image.
It is not consistent because you are using the Q16 version of Magick.NET. It calculates the minimum depth required to represent the specified channels: when all the channels fit in 8 bits it returns 8, but when that is not possible it returns the maximum value (16).
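A rough sketch of that probe in Python (an illustration of the idea, not ImageMagick's actual implementation): a 16-bit quantum can be stored losslessly in 8 bits only when it is an 8-bit value scaled up by 257, since 0xFF * 257 = 0xFFFF.

```python
def minimal_depth(quantums):
    # An 8-bit value c expands to the 16-bit quantum c * 257
    # (0x01 -> 0x0101), so a quantum fits in 8 bits exactly
    # when it is divisible by 257.
    return 8 if all(q % 257 == 0 for q in quantums) else 16

print(minimal_depth([0, 257, 65535]))  # every quantum fits in 8 bits: 8
print(minimal_depth([0, 300, 65535]))  # 300 is not an exact 8-bit value: 16
```

This is why the result can flip between 8 and 16 for images that look identical in Photoshop: it depends on the actual pixel values, not on the file's declared depth.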
bug.zip
Prerequisites
Description
I have a 32-bit exr file with values in [0-1]. When opened in Photoshop it says 32 bit/channel under Image - Mode.
When loading this image with Magick.NET-Q16-HDRI-AnyCPU, the Depth property and the BitDepth method both return 16. When I attempt to save this file as a .tif, it saves a 16-bit tif.
I can save a correct-looking 32-bit tiff by first setting the depth to 32 and then saving it again.
The ColorType is set to TrueColorAlpha, but when I set it to TrueColorAlpha again, the pixel values in the 32-bit (or 16-bit) tiff change. I don't understand why this is happening. I set TrueColorAlpha because otherwise single-channel true-color images are saved as 8-bit indexed, as reported here: https://github.com/dlemstra/Magick.NET/issues/376.
Steps to Reproduce
ColorType issues:
System Configuration