Open peroveh opened 1 year ago
Hi, thanks for your interest in our dataset 😁
Re 2) above: When the average black level is subtracted and the data are clipped (no negatives allowed), it is impossible to avoid a bias, even when averaging over large black areas. The sensor noise process will always have a distribution around a mean value, even if that mean is 0 (photon shot noise, thermal noise, or other). If the recorded value saturates at 0 (the mean value for zero light), the remaining part of the distribution, which now has only positive values, accumulates to a positive average, i.e. you get a dark offset. Since the R, G, and B sensor values are scaled and combined in the color matrix, the clipping bias is scaled differently per channel, giving a "colored" offset. For pipeline processing it is therefore best to keep an offset on the digitized data (rather than a bias created by clipping), and remove the total effect of that offset at the end of the pipeline.

3) Do you have information about the illumination equipment the photographer used (in addition to the room ceiling lights, I assume)?

4) Yes, we also considered noise, or possibly the clipping bias discussed above, as the source. However, detailed inspection of the black parts of the ladder shows no trace of this greenish color (and those parts are darker than the green areas). It also appears that different parts of the ladder have different levels of green. Could there be a green light, a green backdrop, or a window with sun-lit trees in the scene?

Regarding shutter time / exposure levels: I realize it is tricky to get enough dynamic range in the recording to keep highlights unsaturated while still having sufficient level for good SNR in the normally illuminated parts. It does appear, however, that the exposures could have been a bit higher. For example, the max pixel value with both lights on was below 255 (8 bit), and I would assume the sensor is at least 10 bit. So maybe a 6-12 dB higher gain should have been used (2-4 times longer exposure)?
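The clipping bias described above is easy to reproduce numerically. A minimal numpy sketch (the noise level is illustrative, not from the actual sensor): zero-mean Gaussian "dark" noise averages to 0, but clipping it at 0 first leaves a positive residual of sigma/sqrt(2*pi).

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean Gaussian read noise around a true black level of 0 (illustrative sigma).
noise_sigma = 2.0
dark_pixels = rng.normal(0.0, noise_sigma, size=1_000_000)

# Clipping at 0 discards the negative half of the distribution...
clipped = np.clip(dark_pixels, 0.0, None)

# ...so the average no longer converges to 0 but to sigma/sqrt(2*pi) (~0.80 here).
print(f"signed mean:  {dark_pixels.mean():+.4f}")
print(f"clipped mean: {clipped.mean():+.4f}")
```

Because this residual is then multiplied by per-channel gains and the color matrix, it turns into a colored cast in the shadows.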
I think I understand what you're talking about now. What this means is that if I subtract the black level and clip, the dark values collapse to zero, and the positive values due to noise scale and accumulate, resulting in a non-black, biased color. Is this right? But as far as I know, if you don't subtract the black level, the raw data is not linear, and applying a diagonal color correction matrix to data with offsets results in different chromatic adaptation depending on the intensity of the pixels. Also, subtracting the _1 image from the _12 image renders an image with only light 2 on, and I think it is correct to clip it so that there are no negative values. In my pipeline, the important thing is to find the chromatic information of the lights within the color chart, and I don't think the bias around zero that you mention is significant in a pipeline that determines the GT chromaticities and the mixture maps for each illuminant.
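The intensity-dependent cast from applying a diagonal matrix to data with an un-subtracted offset can be sketched numerically (all values here are hypothetical, not from the dataset or camera): white-balance gains that would exactly neutralize a gray patch leave a visible tint at low intensity when an offset remains in the data.

```python
import numpy as np

# Hypothetical channel sensitivities for a neutral patch, and the matching
# diagonal white-balance gains that should neutralize it exactly.
sens = np.array([0.5, 1.0, 0.7])    # R, G, B sensitivity (illustrative)
gains = 1.0 / sens
offset = 64.0                        # black level left in the data (DN)

for intensity in (100.0, 2000.0):
    recorded = intensity * sens + offset   # raw with un-subtracted offset
    balanced = gains * recorded            # = intensity + gains * offset
    # The gains*offset term tints dark pixels; bright pixels stay near neutral.
    print(intensity, np.round(balanced / balanced.max(), 3))
```

The spread between channels is large at intensity 100 and small at 2000, which is exactly the non-linearity you describe: the same neutral surface gets a different chromaticity depending on how bright it is.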
I'll check on it and reply in another comment with the detailed information about the light source.
I was wondering whether the image you visualized was an sRGB image or a RAW image? If it is sRGB, the image may have been distorted by various post-processing steps. As for your point about exposure, I think you're right, but in the process of shooting a large dataset there were probably some limitations on getting every scene at the ideal exposure. Thanks for pointing that out.
Hi, I appreciate your time and answers.

2) Subtracting the dark level removes the (estimated) average dark level. Since any measurement has a noise distribution on both sides of this mean, a dark scene should have both positive and negative values, and these should be retained. In many cameras this is done by shifting the ADC output up by a small known value so that there is a DC offset for each pixel; that offset is then supplied in the DNG file as a non-zero black level. As you point out, leaving it in would give an error in the computation of XYZ etc., so it should be removed. This can be done either by converting to a signed numeric format (permitting negative values), or by using the linearity of the XYZ transform to compute and subtract the component corresponding to the offset. The key is to retain both sides of the noise distribution; otherwise you create a bias from the noise variance. In subtracting the two images, the bias accumulates, since you add only the positive part of the noise distribution for both images. Done properly, with signed values (or the alternative handling), you retain a 0 average but get an increase in noise variance (the sum of the two variances). Since the chromatic information is obtained by averaging many pixels within a patch, the SNR (effective bits) improves by about 1 bit (6 dB) for each factor of 4 in pixel count. So you approach an accuracy where even the small offset caused by improper saturation handling will matter.

4) The displayed images are both in sRGB format, obtained by processing raw with a 6500 K white point (taken from gray patch 2 of chart 2, not chart 0 as I previously said). So: a proper linear cam->XYZ transform, followed by a Bradford adaptation XYZpatch->XYZd65 (given the patch-average XYZ and the D65 XYZ as target), then a transform from XYZd65 to sRGB (see Bruce Lindbloom), a proper gamma function, and saved as PNG.
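A minimal sketch of the Bradford adaptation step described above, using the standard published matrices (Bradford cone matrix, sRGB/D65 matrix). The gray-patch white point below is a made-up value, not the one measured from the dataset:

```python
import numpy as np

# Bradford cone-response matrix and XYZ(D65) -> linear sRGB matrix
# (standard published values, see e.g. Bruce Lindbloom's reference pages).
M_BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                       [-0.7502,  1.7135,  0.0367],
                       [ 0.0389, -0.0685,  1.0296]])
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])
D65_XYZ = np.array([0.95047, 1.0, 1.08883])

def bradford_adapt(xyz, src_white, dst_white=D65_XYZ):
    """Chromatically adapt XYZ from src_white to dst_white via Bradford."""
    lms_src = M_BRADFORD @ src_white
    lms_dst = M_BRADFORD @ dst_white
    A = np.linalg.inv(M_BRADFORD) @ np.diag(lms_dst / lms_src) @ M_BRADFORD
    return A @ xyz

def srgb_gamma(c):
    """Linear -> sRGB transfer function."""
    c = np.clip(c, 0.0, 1.0)
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

# Sanity check: adapting the source white itself must land exactly on D65.
src_white = np.array([0.96, 1.0, 0.85])   # hypothetical gray-patch XYZ
print(np.round(bradford_adapt(src_white, src_white), 5))  # lands on D65_XYZ
```

The full chain is then `srgb_gamma(XYZ_TO_SRGB @ bradford_adapt(xyz, patch_white))` per pixel, with the cam->XYZ matrix applied beforehand.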
Thank you for your expert knowledge and in-depth pointers on low-level photography.
Hi again. Did you get any more information about the additional photo lamps the photographer used? Or was it possibly the camera's flash or an external flash?
I have a few questions for which I have not found any documentation.

1) The main sensor image (108 MPix, I assume) is only provided at 12 MPix. Is this the highest resolution available as raw from Samsung? Is there a description of how the DNG raw is derived from the original sensor data?

2) The raw data appears to be clipped at 0, with 0 offset. This makes it difficult to avoid color bias in dark regions (as the negative values are, I think, pre-clipped and unknown).

3) Scene illumination: Do you have any information about the room and the extra illuminants (2 and 3) in the scenes? What types of illumination (fluorescent, incandescent, LED, other)? I understand that one of the extra lights is used in many scenes. Any spec on it? CCT, spectrum?

4) I have implemented an explicit pipeline using the linear color matrices together with the Bradford transform. In one scene, Place71, I noticed that the difference image (img_12 - img_1) shows gray ladder steps. I am a bit surprised by this, as the ladder looks clearly (partially) green in the img_1 (sun alone) version. All tests were done using the second white (gray) patch in chart 0 for WB. I do not understand this large tinge.
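For reference, the difference-image computation in point 4 can be sketched like this, with hypothetical data (not the actual Place71 images), keeping the subtraction signed and clipping only for display:

```python
import numpy as np

# Hypothetical linear raw data after black-level subtraction:
# img_1 has light 1 only, img_12 has lights 1 and 2.
rng = np.random.default_rng(1)
light1 = rng.uniform(0.0, 0.5, size=(4, 4, 3))
light2 = rng.uniform(0.0, 0.5, size=(4, 4, 3))

img_1  = light1 + rng.normal(0.0, 0.01, size=(4, 4, 3))
img_12 = light1 + light2 + rng.normal(0.0, 0.01, size=(4, 4, 3))

# Keep the difference signed; it isolates the light-2 contribution plus noise.
diff = img_12.astype(np.float64) - img_1

# Clip at the very end, only for display.
display = np.clip(diff, 0.0, 1.0)

print(np.round(np.mean(diff - light2), 4))  # small: the residual is only noise
```

With the signed difference, the residual averages toward zero; clipping each image at 0 before subtracting is what would let the noise-induced bias accumulate.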