raspberrypi / picamera2

New libcamera-based Python library
BSD 2-Clause "Simplified" License

[OTHER] Colour Correction #691

Open Bra1nsen opened 1 year ago

Bra1nsen commented 1 year ago

I'm still interested in estimating solar irradiance with a Raspberry Pi all-sky imager.

I would like to know whether the colour calibration has already been evaluated. How precise is the projection into the sRGB colour space?

I know the solar spectral distribution:

[image: solar spectral distribution]

The following diagram shows the sRGB gamut:

[image: sRGB gamut diagram]

It's possible to calculate the wavelength corrections for the sRGB colour space from the CIE 1931 space:

[image: wavelength corrections derived from the CIE 1931 space]
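For reference, a minimal sketch of that projection, assuming the standard IEC 61966-2-1 XYZ-to-linear-sRGB matrix for the D65 white point (the function name is just illustrative):

```python
import numpy as np

# Standard CIE 1931 XYZ -> linear sRGB matrix (D65 white point).
XYZ_TO_LINEAR_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_linear_srgb(xyz):
    """Map CIE 1931 XYZ tristimulus values (..., 3) to linear sRGB."""
    return np.asarray(xyz) @ XYZ_TO_LINEAR_SRGB.T
```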

So when we capture images with the Raspberry Pi HQ camera, the images should be sRGB, right? If so, it should be possible to estimate the irradiance over the visible spectrum with a physical model.

I'm just interested in learning more about the colour calibration described in the Raspberry Pi tuning guide, and whether it has already been evaluated.

davidplowman commented 1 year ago

Hi again, so the 2x2 binned camera modes will give you 1080p30 no problem. You'll have to stop and switch camera mode to capture a full resolution image, and then switch back again to continue the video recording. Hope that makes sense!
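For illustration, a minimal sketch of that pattern using picamera2's switch_mode_and_capture_file helper (resolutions and filenames are just examples):

```python
from picamera2 import Picamera2

picam2 = Picamera2()
video_config = picam2.create_video_configuration(main={"size": (1920, 1080)})
still_config = picam2.create_still_configuration()  # full sensor resolution

picam2.configure(video_config)
picam2.start()

# ... stream or record from the 2x2 binned video mode ...

# Temporarily switch to the full-resolution mode, capture one still,
# then switch back to the video mode automatically.
picam2.switch_mode_and_capture_file(still_config, "full_res.jpg")
```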

Pepslee commented 1 year ago

I use the approach you describe, but switching the configuration to change resolution freezes the video stream for about a second. So I prefer to use the raw stream: I can capture a 4K image without any freezing, but I have to apply some colour transformation to make the image match the main stream's colours.
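For example, something along these lines with picamera2 (sizes are illustrative; the raw frames still need your own colour processing, as described):

```python
from picamera2 import Picamera2

picam2 = Picamera2()
config = picam2.create_video_configuration(
    main={"size": (1920, 1080)},
    raw={"size": picam2.sensor_resolution},
)
picam2.configure(config)
picam2.start()

main = picam2.capture_array("main")  # processed stream, no mode switch
raw = picam2.capture_array("raw")    # full-resolution Bayer data
```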

Bra1nsen commented 1 year ago

Hey @davidplowman,

I hope you're doing well. I'm currently finishing my master's thesis, and I'm summarizing everything I have learned from you through this issue.

Could you have a quick look to check whether it's correct so far, and what's missing or wrong? Is the gamma transformation part of the sRGB standard? I mean, if one were to invert it, would we leave the sRGB space?

[images: the four equations discussed in the reply below]

Kind regards, Paul

davidplowman commented 1 year ago

Hi, yes I think that looks basically correct to me, though I'm not always sure I've followed the notation. But let me talk through the 4 equations you've got there, hopefully that will fill in details.

  1. "RGGB = ..."

I assume this is describing what an "RGGB" Bayer pattern looks like. Obviously we know that there are 3 more R/G/B permutations possible. I would describe a W x H sized image as being made up of W/2 x H/2 little 2x2 Bayer cells like the one you have shown. So that all seems to make sense to me!

  1. "R = R ..."

So this looks like it's describing a simple Demosaic procedure that turns your W x H sized Bayer image into a W/2 x H/2 fully-sampled (3 channel RGB) image. Obviously there are many alternative Demosaic methods that keep the full resolution (so you get a W x H fully-sampled output), but these start to become quite complex, and many involve sophisticated interpolation and edge-detecting techniques. But the basic method you've described is very simple, very fast and works very well for many applications (as you've discovered!).
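A minimal sketch of that simple demosaic for an RGGB image, averaging the two green samples (assuming a numpy array of Bayer data):

```python
import numpy as np

def simple_demosaic(bayer):
    """Turn a (H, W) RGGB Bayer image into a (H/2, W/2, 3) RGB image."""
    bayer = bayer.astype(np.float32)
    r = bayer[0::2, 0::2]                            # top-left of each cell
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2  # average of the two greens
    b = bayer[1::2, 1::2]                            # bottom-right of each cell
    return np.dstack([r, g, b])
```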

  1. "CGM = ..."

So I was confused for a moment by what CGM meant, but I infer that it's something like "Colour Gain Matrix". The repeated use of uppercase G for these 3 colour gains also took a moment to figure out, but maybe that would be OK with a bit of extra explanation. Anyway, your definitions of the CGM and CCM seem fine. I wonder if you want to explain a bit more around the definitions? For example, we normally use the CGM just to correct for white balance, and the CCM then converts camera RGB to sRGB. A usual property of the CCM is that every row sums to 1, as this ensures that grey input (R, G, B all equal) gives grey output.
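A sketch of how those two might be applied in code; the gain and matrix numbers below are placeholders, not a real tuning (note that each CCM row sums to 1):

```python
import numpy as np

cgm = np.diag([1.8, 1.0, 1.5])  # white-balance gains G_R, G_G, G_B (placeholders)
ccm = np.array([[ 1.50, -0.30, -0.20],   # camera RGB -> sRGB matrix;
                [-0.25,  1.60, -0.35],   # every row sums to 1 so that
                [-0.10, -0.40,  1.50]])  # grey input stays grey

def apply_cgm_ccm(rgb):
    """rgb: (..., 3) linear camera RGB; returns linear sRGB values."""
    return rgb @ cgm.T @ ccm.T
```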

  1. "I_S_RGB ..."

This looks like it's defining the gamma transform. It turns linear light values into non-linear ones, which is what most display devices expect (it was historically a requirement for CRTs, but now we're kind of stuck with it). Just as a small note, our imaging pipeline applies the CGM before Demosaic (Demosaic works better after white balancing), and the CCM after (because only after Demosaic do you have an R, G and B sample at each location).

The colour space does specify the transfer function associated with it, and also the RGB <-> YCbCr matrix that should be used. We normally convert camera RGB to sRGB, which has an associated YCbCr matrix and also a transfer (gamma) function. There's a definition of them here. Note that our pipeline never uses the "official" sRGB transfer function because it creates very washed-out looking images; I've never encountered a camera vendor that does. Our gamma curve is defined in the JSON camera tuning file, under "rpi.contrast".
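For completeness, a sketch of the "official" sRGB transfer function (as noted above, the Pi's actual gamma curve comes from "rpi.contrast" in the tuning file instead):

```python
import numpy as np

def srgb_gamma(linear):
    """Map linear values in [0, 1] to non-linear sRGB (IEC 61966-2-1)."""
    linear = np.asarray(linear, dtype=np.float64)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)
```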

Hope that helps!

Bra1nsen commented 6 months ago

Hi David,

Thanks for your help. I adjusted it as follows:

[images: revised equations]

I hope that makes sense now!

&

We're still using the Raspberry HQ Cam, and I'm currently revisiting the task of capturing exposure time series. If you've worked on this in the meantime and have any updates or recommendations, I'd love to hear them. We're also searching for an intelligent exposure mode that can dynamically adjust exposure time to optimize bit depth across varying scene brightness.

[image]

I hope you're doing well, David. I'd love to hear from you again; your support has always been invaluable!

Best, Paul

davidplowman commented 6 months ago

Hi, and thanks for the message. I've been away for a couple of weeks (occasionally they let me out for good behaviour!), so apologies for the delay.

I think what you wrote there all makes sense to me, those look like the right steps to get an sRGB image.

I don't think there have been any particular changes in the way the camera system works in this area. We're still trying to move forward with a signalling mechanism which would tell you when particular settings have been applied, but have so far not made any progress in libcamera. (We've been a bit busy with Pi 5, among other things!)

Are you still selecting a range of fixed exposure times? The Pi's AEC/AGC is quite good at selecting exposure times that, for example, "saturate no more than x% of pixels"; I don't know if that would help. Obviously you can do that for yourself by analysing the output images, but letting the Pi do it benefits from the hardware capabilities.
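If you do stay with fixed exposures, a sketch of a series capture with picamera2 might look like this (the values are just examples; as mentioned above there's no signalling mechanism yet, so a robust version should verify the exposure in each frame's metadata):

```python
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()

for exposure_us in (125, 500, 2000, 8000, 32000):  # example bracketing
    picam2.set_controls({"AeEnable": False,
                         "ExposureTime": exposure_us,
                         "AnalogueGain": 1.0})
    # Controls take a few frames to land; checking capture_metadata()
    # for the requested ExposureTime before saving would be safer.
    picam2.capture_file(f"series_{exposure_us}us.jpg")
```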

Bra1nsen commented 6 months ago

Hi David,

Thanks for your reply—great to hear from you! I hope you had some relaxing days.

Our team is currently in the process of starting a company: www.wematics.com. We will soon begin our first field trials using Raspberry Pi-based all-sky cameras.

Currently, we are capturing images using:

```
libcamera-still -o /path/to/output_file.jpg --ev -2 --denoise cdn_off --post-process-file /home/pi/Desktop/skycam/scripts/hdrgpt.json --immediate -n
```

with the following post-process configuration:

```json
{
    "hdr" : {
        "num_frames" : 8,
        "lp_filter_strength" : 0.2,
        "lp_filter_threshold" : [ 0, 10.0, 2048, 205.0, 4095, 205.0 ],
        "global_tonemap_points" : [
            { "q": 0.1, "width": 0.05, "target": 0.15, "max_up": 20.0, "max_down": 5 },
            { "q": 0.5, "width": 0.05, "target": 0.45, "max_up": 20, "max_down": 2 },
            { "q": 0.8, "width": 0.05, "target": 0.7, "max_up": 20.5, "max_down": 0.5 }
        ],
        "global_tonemap_strength" : 0.1,
        "local_pos_strength" : [ 0, 6.0, 1024, 2.0, 4095, 2.0 ],
        "local_neg_strength" : [ 0, 4.0, 1024, 1.5, 4095, 1.5 ],
        "local_tonemap_strength" : 1.0,
        "local_colour_scale" : 1.5
    }
}
```

As I understand it, this isn't "real" HDR, meaning no exposure time series are captured. Instead, the output image is processed to be HDR-like. Is that correct?

I may soon start a PhD focused on capturing exposure time series and all-sky camera self-calibration. It would be great to consult with you occasionally over the upcoming months regarding script optimization. Ideally, we could develop an intelligent exposure time series capture container similar to the libcamera app command. At the very least, I plan to make some progress on this soon.

Best, Paul

davidplowman commented 6 months ago

That all sounds exciting!

The "post-processing HDR" is not the best implementation of HDR. It does add lots of low-exposure images together, so in this sense it does manage to preserve highlights compared to a conventional capture, whilst still reducing noise in the dark areas. But it adds together images that have been through the full pipeline, including gamma. Thus the brightness is no longer linear, and adding these images can end up with results that look a little strange.

Adding linear images is clearly a better idea. You could capture raw images and add those directly, but then you still have to do the raw to RGB/YUV conversion yourself (using dcraw or some other software package). Alternatively you could turn off the gamma and then (as the outputs are now linear) add the RGB/YUV images, but you would be throwing away 4 of the 12 bits in each pixel to get 8-bit output images.
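As a sketch of the first option, accumulating raw Bayer frames in a float buffer keeps everything linear (black-level subtraction and the raw-to-RGB conversion are left out, and depending on the configuration the raw format may need unpacking first):

```python
import numpy as np
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration(raw={}))
picam2.start()

accumulator = None
for _ in range(8):  # e.g. 8 short exposures
    raw = picam2.capture_array("raw").astype(np.float32)
    accumulator = raw if accumulator is None else accumulator + raw

# 'accumulator' is still linear, so the summation is physically meaningful;
# demosaic, white balance, CCM and gamma all still have to be applied.
```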

OpenCV has some quite good pseudo-HDR algorithms for non-linear 8-bit images (such as Mertens Merge), so those might be worth investigating. For Pi 5 we're quite keen to implement a raw to RGB/YUV converter using the on-board ISP so it would be easy to use and blindingly fast, but of course we haven't actually done it yet. And I'd quite like to incorporate some of our latest ideas for HDR tonemapping (much better than in the post-processing stage), which would be super exciting, only there's so much to do...!
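A sketch of the Mertens approach with OpenCV, for three differently exposed 8-bit images (the filenames are placeholders):

```python
import cv2

images = [cv2.imread(f) for f in ("short.jpg", "mid.jpg", "long.jpg")]
merge = cv2.createMergeMertens()
fused = merge.process(images)  # float32 output, roughly in [0, 1]
cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```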

Anyway, very happy to continue discussing ideas with you as you make progress!