raspberrypi / rpicam-apps


Errors using Libcamera Camera Tuning Tool to fix Lens Shading #359

Closed harshilsheth0412 closed 1 year ago

harshilsheth0412 commented 2 years ago

Hello,

I am trying to achieve Lens Shading Correction for an imx219 camera on Picamera2.

I referred to chapter 6 (Camera Tuning) of https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf and followed the steps to generate a tuning file using ctt.py, with two .dng images at 5000K as input.

I generated a tuning file without errors. But when I load the tuning file via the Picamera2 API or via libcamera-hello --tuning-file, I get a segmentation fault (logs attached below). It seems to have something to do with the format of the tuning file generated by ctt.py: libcamera doesn't accept the headers present in it. If I remove the version, target, and algorithms entries from my tuning file, I am able to apply it correctly. This file has a field for rpi.alsc, which is what I care about most.

The image generated with the tuning file is different from the one generated without it, but it is not lens shading corrected. For a pure white source of light emitted directly into the camera, these are the output images I get:

Raw Image: raw

Tuned Image: tuned

Seg_fault.txt

Can someone help me use the CTT tool effectively so I can fix lens shading correctly?

davidplowman commented 2 years ago

Hi, can you attach the tuning file that it made so that we can look at it? Please also attach the output of libcamera-hello --version.

I know that there has been some work recently on adding support for a version 2 tuning file format, so I'm wondering if you might have a version 2 tuning file, but an older libcamera? So maybe also try sudo apt update and sudo apt upgrade and see if that makes any difference. Thanks!

harshilsheth0412 commented 2 years ago

Hey David, thank you for writing back. You may be absolutely right about the version mismatch.

The tuning file that ctt.py generates starts with a version 2.0 header: image

Once I modify it to remove the header, it does not give me the segmentation fault error. The modified file looks something like this - image

However, doing a sudo apt update and sudo apt upgrade does not bring my libcamera version up to one that matches the version 2.0 tuning file: image

I've attached the two tuning files as well: hello.txt is the default file generated, and imx219.txt is the modified file that runs without errors but does not produce a white image. The tool generates .json files; I converted them to .txt so I can upload them here. Is there another way to force the CTT tool and libcamera versions to match?

hello.txt imx219.txt

naushir commented 2 years ago

Is there another way to force the CTT tool and libcamera versions to match?

The CTT is distributed as part of the libcamera source, so it should be in sync with the libcamera library source. Are you able to build the libcamera libraries manually and install them? You can find instructions to do that here.

naushir commented 2 years ago

The above imx219.txt file contains only lens shading configuration. Presumably you are copying this section into a larger tuning file with the rest of the configuration?

harshilsheth0412 commented 2 years ago

Hey Naushir, thank you for looking into this. I tried rebuilding libcamera from the instructions you shared.

In my images folder I only have images for lens shading, whereas ctt.py expects both lens shading and Macbeth images. So when I run ctt.py it gives me the error ERROR: No usable macbeth chart images found and does not generate a json.

To get around this, I use the alsc_only.py script, which appears to be a wrapper around ctt.py for alsc images only. Hence the imx219.txt that you reviewed has the alsc config only. I've tried to apply it as-is, but I am still running into the segmentation fault error that I shared initially. The only way I've found to get around this seg fault has been to remove these three lines from the json: "version": 2.0, "target": "bcm2835", "algorithms": [

but the json then doesn't correct for lens shading. Here's my libcamera-hello --version after following all the steps from Building Libcamera here

I'll try to copy the alsc block on an uncalibrated.json file and see if that helps.

image

naushir commented 2 years ago

I cannot seem to trace the libcamera version commit 0c55e522 in your build. Can you share the location of your git tree and the top commit message please?

EDIT: It seems like this may be an issue with our packaging - it does not pick up the top libcamera source commit id when generating the version string.

davidplowman commented 2 years ago

I'd like to make one extra comment about all this.

The LSC tables in the linked hello.txt are quite different from the standard ones (we've noted often how our standard imx219 modules show very strong colour shading). Anyway, because AWB is calibrated to run after LSC, it means the standard AWB calibration will almost certainly be completely wrong now. This may explain why images are coming out in strange colours.

The "official" answer would be that a complete re-tune is required, you can't just change the LSC tables (at least, not by a very large amount). As a temporary workaround you could find the "bayes" parameter in the tuning file and set it to zero. This will give you simple grey world AWB but in most circumstances you should get a better result. The colour matrices will be wrong too, but that's a less obvious effect than the AWB going haywire (and there too the answer is that a complete re-tune is required).

If you do get this running with the grey world AWB, it would be interesting to know what red and blue gains you have. You could run libcamera-hello -t 0 --info-text "%rg %bg" to see them on the title bar of the preview window.
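
For reference, here is a rough Picamera2 sketch of reading the same red/blue gains from frame metadata (an illustration only, not code from this thread; it assumes Picamera2 is installed and your tuning file is called imx219.json):

    import time
    from picamera2 import Picamera2

    tuning = Picamera2.load_tuning_file("imx219.json")
    picam2 = Picamera2(tuning=tuning)
    picam2.configure(picam2.create_preview_configuration())
    picam2.start()

    time.sleep(2)  # give AWB a couple of seconds to converge
    metadata = picam2.capture_metadata()
    red_gain, blue_gain = metadata["ColourGains"]
    print(f"red gain: {red_gain:.3f}, blue gain: {blue_gain:.3f}")
    picam2.stop()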

Out of interest, is this camera going to be running in a controlled environment, or is it likely to be operating in a fairly general range of conditions?

naushir commented 2 years ago

Is there anything more to look at here? If not, I'll close this down.

harshilsheth0412 commented 2 years ago

Hey David, Naushir,

My sincerest apologies for the delay. I was able to fix the segmentation fault issue by doing these two steps:

  1. Rebuilding libcamera manually by following the steps here.

  2. Running the alsc_only.py tool with two raw images captured under a white light. This generates a .json with only the alsc correction. I copy and replace the alsc table into the uncalibrated.json file (a sketch of this step is below). This .json no longer gives me the segmentation fault. It also has "bayes" set to 0.
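
For illustration, the "copy the alsc block across" step could be scripted roughly like this (not code from this thread; the file names alsc_only_output.json and uncalibrated_with_alsc.json are placeholders):

    import json
    from picamera2 import Picamera2

    # Load the alsc_only.py output (rpi.alsc only) and the base tuning file to patch;
    # the paths here stand in for wherever your copies live.
    with open("alsc_only_output.json") as f:
        alsc_tuning = json.load(f)
    with open("uncalibrated.json") as f:
        base_tuning = json.load(f)

    # find_tuning_algo locates the algorithm's parameter dict within the tuning structure
    src_alsc = Picamera2.find_tuning_algo(alsc_tuning, "rpi.alsc")
    dst_alsc = Picamera2.find_tuning_algo(base_tuning, "rpi.alsc")

    # Replace the base file's alsc parameters with the freshly calibrated ones
    dst_alsc.clear()
    dst_alsc.update(src_alsc)

    with open("uncalibrated_with_alsc.json", "w") as f:
        json.dump(base_tuning, f, indent=4)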

In terms of getting a pure white image - this is what I get:

Raw Image: github_untuned

Using uncalibrated.json under a white light: github_white

So my image is much better, but still not pure white.

Furthermore, when my light source is green, the alsc table does not provide much of a correction:

Raw: github_untuned

Using uncalibrated.json under green light

github_white

Do I have to input multiple images under different color lights for the alsc table to provide a more accurate correction across color inputs?

To answer your question David - We're using the camera in a controlled environment and our goal is to get true colors. I started pursuing the camera tuning tool for alsc correction because I thought it'd be the quickest. But if you think there's a better way to achieve it, I'm happy to course-correct. I've attached my uncalibrated.json file that generated these images below: uncalibrated.txt

naushir commented 2 years ago

Do I have to input multiple images under different color lights for the alsc table to provide a more accurate correction across color inputs?

Yes, ideally you want to capture calibration images at a range of colour temperatures, and the algorithm will interpolate between tables based on the current AWB calculated colour temperature.

When you are capturing the LS calibration images, are you pointing the device at a flat surface, like a wall, or are you using some LEDs placed directly against the module lens?

harshilsheth0412 commented 2 years ago

Hey Naushir,

Thank you, let me try giving it more images as input and update this thread. To answer your question - We are using an LED placed under a light diffuser as our input to the camera lens.

naushir commented 2 years ago

@harshilsheth0412 are you able to do a full camera retune (not just the LS block) and report if you are still seeing any issues?

rarestg commented 2 years ago

@naushir I assume that by "full camera retune" you mean running the entirety of CTT.py. Unfortunately, we're not able to do this because our camera is, in a sense, trapped. It only points to a diffuser with an LED behind it, so we can't put a lens on it and capture Macbeth color charts. And when CTT.py sees there are no macbeth images, it fails without generating a calibration file...

We are limited by our physical setup. We have an RGB LED pointing at a diffuser, and the camera pointed at the other side of said diffuser. In the past, running PiCamera V1 and Richard Bowman's implementation of lens shading, we were able to do the following:

  1. Set the LED to white. At this point, the camera would capture something like the image below: harshil-1
  2. The lens shading algorithm knew that it "should" have captured a pure white, uniform image and corrected the color distortion, vignetting, etc by way of a lens shading table.
  3. We would apply that table to all subsequent images captured with the same light source setup, and it worked great.

So we don't actually have a way to modify color temperatures and follow this standard procedure. But shouldn't it be possible to create an adequate lens shading table that achieves this goal for any arbitrary lighting setup?

SCR-20221003-jzc

Otherwise, should we just try creating a "lens shading" file that is the difference between:

  1. a RAW capture of the vignetting + pink, and
  2. a pure white image,

and then apply that difference to every image we capture?

So for instance, if the camera capture has a pixel at [125, 255, 255], our "lens shading" file would contain [+130, 0, 0] at that pixel, bringing the result to [255, 255, 255] when applied.

naushir commented 2 years ago

The lens shading correction by default provides a roll-off in the correction strength as you go from the centre to the edges of the image. To stop this and give a fully flat image, you can set "luminance_strength": 1.0 in the rpi.alsc config block.

I assume for your use case, the lighting conditions are fixed? If so you can also disable the adaptation of the lens shading correction. This will simply use your calibration table without any adjustments. To do this, set "n_iter": 0 in the rpi.alsc config block.

Give these a try and see if it helps and we can take it from there.
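
If you prefer to do this from Picamera2 rather than editing the JSON by hand, a sketch along these lines (illustrative only, not from this thread; it assumes an imx219.json tuning file) applies the same two settings:

    from picamera2 import Picamera2

    tuning = Picamera2.load_tuning_file("imx219.json")
    alsc = Picamera2.find_tuning_algo(tuning, "rpi.alsc")
    alsc["luminance_strength"] = 1.0  # fully correct luminance, no roll-off towards the edges
    alsc["n_iter"] = 0                # disable adaptive ALSC; use the calibrated tables as-is

    picam2 = Picamera2(tuning=tuning)
    picam2.configure(picam2.create_still_configuration())
    picam2.start()
    picam2.capture_file("flat_field_test.jpg")
    picam2.stop()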

harshilsheth0412 commented 2 years ago

Hey @naushir, yes, our lighting conditions are fixed. We tried varying both the luminance_strength and n_iter values, and while we can see them affecting our image, the tool is probably adding some further correction that prevents a pure white image. Here are our findings for images under white light, with the fixed setup that @rarestg described.

image

image

So combining both of the suggestions you gave, n_iter: 0 and luminance_strength: 1.0, gives us this image: lum_1_niter_0

Also attaching the json we applied to these images. imx219.txt

naushir commented 2 years ago

By default, the AE will not allow the whole image to reach saturation, so you will not get a full white image.

You can override this by specifying for example --ev 2.0 to force an increased brightness. Alternatively, if you have fixed lighting, setting --shutter and --gain to some fixed values to your liking would also work.
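
The rough Picamera2 equivalents of these two options would be something like the following sketch (illustrative only, not from this thread; the numbers are placeholders):

    from picamera2 import Picamera2

    picam2 = Picamera2()
    picam2.configure(picam2.create_still_configuration())

    # Option 1: raise the AE target, like libcamera-still --ev 2.0
    picam2.set_controls({"ExposureValue": 2.0})

    # Option 2 (fixed lighting): fix shutter and gain instead, like --shutter/--gain
    # picam2.set_controls({"ExposureTime": 100000, "AnalogueGain": 1.5})

    picam2.start()
    picam2.capture_file("bright.jpg")
    picam2.stop()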

harshilsheth0412 commented 2 years ago

Hey Naushir,

Setting --ev 2.0 really helped with getting a white image.

However, one drawback is that it also gives a white image for other light color sources, since we're forcing the brightness to be high. So for a green light source, the lens shading table produces a grey image and then forcing --ev 2.0 changes it to white.

Is there a way to retain true colors while performing lens correction such that green can be seen as green?

This is the uncalibrated image: image

This is the image after applying the ctt tuning file: image

This is the image after forcing --ev 2.0 oie_1134152eiecK39U (2)

naushir commented 2 years ago

@harshilsheth0412, I'm afraid you are going to have to help me understand exactly what you are trying to do here from the very beginning. Could you provide a detailed description along with the exact commands you are using to try and achieve this?

Without having much context on your problem, I can only suggest you do not control --ev, and let the AGC converge to its natural target. By pushing EV high, you are simply saturating the image to produce white, and in the process losing the actual data in the samples. This is probably not what you want.

harshilsheth0412 commented 2 years ago

Hey Naushir, sure, here's my explanation:

Hardware Setup :

Background

Rares and I are working on an application that requires super focused images (think a microscope). In the past, we worked with the Picamera V1 library where the camera would capture an image like this when the LED was set to white.

Then we would use Richard Bowman's implementation of the Lens Shading algorithm to correct the color distortion, vignetting etc and get a pure white image. The algorithm would understand that the image should've been a pure white and generates a correction table.

We would then apply the correction table to the picamera library before image capture, and we would get a uniform white image for white LED light. This also worked for other RGB color sources from the LED (e.g. green, blue). Note that the image generated would be white without compromising on saturation. If I were to add a red dot on top of the diffuser, it would capture the red dot correctly without compromising either the red or the white.

What we're trying

Now we have decided to move to the Picamera2 library, which is based on libcamera. We found the ctt.py tool, which from its description seems like it can do what Bowman's library did for lens shading.

image

At this point we started tuning values in the .json file like n_iter and luminance_strength. We also changed --ev and --shutter, which helped saturate the image to white, but we would lose other colors in the image. So if I were to have a red dot on my diffuser, forcing saturation would cause us to lose information, which isn't ideal. Also, the setup now doesn't differentiate between white light and green light, which won't work for us. We're looking for a clean image for every color, without the pink vignetting that's getting added. Then we can apply the correction factor every time we capture an image.

Here's what we see for green:

Raw image using uncalibrated.json :

Image after using imx219.json as --tuning-file :

naushir commented 2 years ago

@harshilsheth0412 thank you for the explanation! I'll start with the following suggestion:

1) For the shading calibration, you ought to be capturing the images with the camera module placed directly on the LCD, ideally with a diffuser placed in-between. Are you able to do this? Additionally, you should take multiple images positioned slightly differently for each colour temperature. This will allow the CTT to average results out.

2) Once calibrated, you can manually set shutter and gain to some reasonable values, but ensure you do not saturate the image. Do not use the ev option. To get a "pure white" image, you ought to switch to manual white balance (the awbgains option), and set the red/blue gains appropriately for the colour temperature your LCD emits. You may also have luck using one of our presets via the awb option instead of manual gains.

Given your illumination is fixed, I would advise you switch to manual shutter, gain and white balance and disable the auto algorithms entirely.
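
As a sketch of what "fully manual" might look like in Picamera2 (illustrative only, not from this thread; the control values are placeholders to be tuned for your setup):

    from picamera2 import Picamera2

    tuning = Picamera2.load_tuning_file("imx219.json")
    picam2 = Picamera2(tuning=tuning)
    picam2.configure(picam2.create_still_configuration())

    picam2.set_controls({
        "AeEnable": False,           # no auto exposure
        "AwbEnable": False,          # no auto white balance
        "ExposureTime": 100000,      # microseconds; keep below saturation
        "AnalogueGain": 1.5,
        "ColourGains": (1.6, 1.5),   # (red, blue) gains for your colour temperature
    })

    picam2.start()
    picam2.capture_file("manual_capture.jpg")
    picam2.stop()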

naushir commented 2 years ago

@harshilsheth0412 were you able to try out any of my suggestions above?

harshilsheth0412 commented 2 years ago

Hey @naushir, thank you for all the help. We were able to get a non-vignetted pure white image using --shutter 100000 and --gain 1.5.

Here's the command we used: libcamera-still --tuning-file /home/pi/libcamera/src/ipa/raspberrypi/data/imx219.json --shutter 100000 --gain 1.5

Here's the answer to your questions plus my follow up questions:

  1. Yes, we've placed our camera lens setup directly on top of an RGB LED, with a diffuser separating the two. On your point about taking multiple images positioned slightly differently for each colour temperature: we can move our camera lens setup vertically by a few millimeters, so we can capture images positioned differently on the Y axis; however, we have no control over determining or adjusting the colour temperature. Is there a way around it?

  2. Setting the --shutter was what fixed it for us. So my understanding is that the lens + camera produced a vignetted image because it didn't get enough exposure time and setting the --shutter value helped with that, is that correct?

This may be a Picamera2 question, but I am now trying to replicate the following command in Picamera2 code, using @davidplowman's example here.

The command: libcamera-still --tuning-file /home/pi/libcamera/src/ipa/raspberrypi/data/imx219.json --shutter 100000 --gain 1.5

My code:

            tuning = Picamera2.load_tuning_file("imx219.json")
            camera = Picamera2(tuning=tuning)
            camera.configure(camera.create_still_configuration(main={"size": (3280, 2464)}))
            algo = Picamera2.find_tuning_algo(tuning, "rpi.agc")
            algo["exposure_modes"]["normal"] = {"shutter": [1000000, 150000], "gain": [1.4, 1.6]}
            camera.start()
            camera.capture_file('test.jpg')
            camera.stop()

But my resultant images are WAY BRIGHTER than what I saw on the command line. Is there some Picamera2 default config that is causing this? I guess this should be pretty straightforward, but maybe my code is missing some basic setup step.

I placed a simple piece of paper with multiple colors drawn on it under my camera, and here's the difference between using the libcamera command and Picamera2's capture_file():

Command image: image

Capture File: wtf

naushir commented 2 years ago
  1. Yes, we've placed our camera lens setup directly on top of an RGB LED, with a diffuser separating the two. On your point about taking multiple images positioned slightly differently for each colour temperature: we can move our camera lens setup vertically by a few millimeters, so we can capture images positioned differently on the Y axis; however, we have no control over determining or adjusting the colour temperature. Is there a way around it?

If you are only ever using the single led with the same colour temperature, there is no need to do anything more.

To set manual exposure/gain in picamera2, look at the following example: https://github.com/raspberrypi/picamera2/blob/main/examples/exposure_fixed.py or look at controls_*.py in the same directory.

If there is nothing further to look at for this issue, feel free to close it down.

davidplowman commented 2 years ago

Hi, just a couple of things. In Picamera2, if you want to adjust the tuning I'm sure you need to do it before creating the Picamera2 object. Also, I wonder whether this line has a typo:

            algo["exposure_modes"]["normal"] = {"shutter": [1000000, 150000], "gain": [1.4, 1.6]}

It looks to me like there's an extra zero in the first shutter number, though as noted above, it might be having no effect at all!

But as Naush said, you can set the shutter time and gain exactly as you do in the libcamera-still command and you should get an indistinguishable result (just be sure to set them before camera.start()).
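
Putting those two points together, a corrected sketch (illustrative, not David's exact code) might look like:

    from picamera2 import Picamera2

    # Any tuning edits must happen here, before the Picamera2 object is created
    tuning = Picamera2.load_tuning_file("imx219.json")

    picam2 = Picamera2(tuning=tuning)
    picam2.configure(picam2.create_still_configuration(main={"size": (3280, 2464)}))

    # Mirror the libcamera-still command by setting shutter/gain as controls before start()
    picam2.set_controls({"ExposureTime": 100000, "AnalogueGain": 1.5})

    picam2.start()
    picam2.capture_file("test.jpg")
    picam2.stop()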

harshilsheth0412 commented 2 years ago

Yes, I had the order of setting exposure modes wrong. Works now!!!

Thank you so much folks for all the help and your valuable time! Closing this thread.

harshilsheth0412 commented 2 years ago

Hey @davidplowman, @naushir,

I am reopening this issue to get some more help. It seems like both of the image correction parameters that worked for me initially, shutter speed and gain, provide a true white at the cost of some image data loss. That works when our target capture is well defined, but not when we're trying to capture granular detail, like the finer image details under a microscope.

A quick recap:

We're trying to build a microscope using this hardware setup:

Hardware Setup :

A camera (a combination of an imx219 image sensor and a 2.5x objective compound focus lens) fitted vertically, facing downwards towards a flat platform. The camera is fixed and placed inside a plastic enclosure. The flat platform consists of a light diffuser with an RGB LED placed under it. The RGB LED emits colored light (white, green, blue, etc.) vertically upwards, straight through the diffuser, through the compound lens and onto the imx219 sensor.

To get rid of any lens shading that came with this setup, I used the ctt.py tool with one or two images, and as @naushir suggested, configured the tuning parameters to reflect this:

libcamera-still --tuning-file /home/pi/libcamera/src/ipa/raspberrypi/data/imx219.json --shutter 100000 --gain 1.5

This worked really well when the image under the lens was a colored piece of paper like this:

image

However, because this device is to act as a microscope, our actual field images require us to capture more granular details, something like this:

image

This image is a result of only using the camera tuning tool without additional gain or shutter speed adjustments.

Now, any shutter speed over 7500 results in a complete whitening of the image, making it useless. I tried to find the optimum shutter speed and gain (around 5000 and 1.1) but that also results in blurry images and data loss around the edges, which isn't ideal for our application.

image

Based on this, I figured that the best way to get a good white image without losing data is to just use the ctt.py tool and the tuning file it generates, and avoid corrections on top of it. The question is: what inputs do I give it so the image is as purely white at the edges as it is in the centre?

naushir commented 2 years ago

@harshilsheth0412 it sounds like you really ought to switch to manual white balance settings for your use case. If you have a known, fixed colour temperature emitted from your backlight, this will likely give you better results than over-exposing the image until saturation. Would you be able to try this with the --awbgains command line argument?

naushir commented 2 years ago

Another thought - could the blurriness on the edges be attributed to the lens you have attached? You may not have seen this effect in the earlier captures because of the lack of fine details.

harshilsheth0412 commented 2 years ago

I am using a standard RGB LED with a diffuser, so I am estimating the color temperature to be 5200K, although I can fine-tune the number via trial and error. Assuming 5200K is the right color temperature, how do I find the right values for --awbgains (red,blue)?

As for the blurriness, a good white to black transition helps our software identify these details better, which makes producing the white background crucial for us.

davidplowman commented 2 years ago

The colour temperature calibration curve for the imx219 is here. So you can see that the normalised red and blue values (compared to green) for 4600K are 0.6079, 0.6721 (this line). So the gains to try are the reciprocals of these values, i.e. --awb-gains 1.645,1.488. After that you'll probably have to fiddle a bit with trial and error to see what works best for you.
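
Spelled out, the arithmetic is simply the following (values copied from the linked ct_curve line):

    # normalised red/blue values (relative to green) at 4600K from the ct_curve
    r_norm, b_norm = 0.6079, 0.6721
    red_gain, blue_gain = 1 / r_norm, 1 / b_norm
    print(round(red_gain, 3), round(blue_gain, 3))  # 1.645 1.488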

harshilsheth0412 commented 2 years ago

So I tried fiddling around with those two values, but no value got the image even close to white.

this is for --awb-gains 1.75, 1.42

image

I tried a few values from the "ct_curve" list, and it did generate a uniform colored image, just not a uniform white.

naushir commented 2 years ago

Can you provide the exact command line used?

davidplowman commented 2 years ago

Just wondering very slightly if it's sensitive to the space in between those numbers...?

naushir commented 2 years ago

Just wondering very slightly if it's sensitive to the space in between those numbers...?

That was exactly my thought :) Trying it out myself - and the parser throws an error if I add a space.

harshilsheth0412 commented 2 years ago

Apologies, I added the space while typing. Here's the copy pasted command I used :)

libcamera-still --tuning-file /home/pi/libcamera/src/ipa/raspberrypi/data/imx219.json --awbgains 1.71,1.44

naushir commented 2 years ago

It might make sense to try and see what the AWB algorithm thinks the gain values ought to be. If you run

libcamera-still --tuning-file /home/pi/libcamera/src/ipa/raspberrypi/data/imx219.json --info-text "Red: %rg Blue: %bg" -t 0

the titlebar should show you what gains AWB has converged to.

harshilsheth0412 commented 2 years ago

That's helpful. The values being used are (0.94, 1.36). I think the problem is more with the patchiness of the white, as seen on the right side of the image:

image

Isn't --awbgains more about adjusting the overall image red/blue values uniformly across the entire image?

naushir commented 2 years ago

Isn't --awbgains more about adjusting the overall image red/blue values uniformly across the entire image?

That's correct, it is applied uniformly across the whole image and in this case, it is giving you your white colour without oversaturating the image data.

The colour around the edges is likely due to not having an accurate lens shading calibration. Perhaps this lens produces extreme gains at the edges that we cannot cope with, or perhaps there's a problem with the calibration procedure? Can you share the table(s) that the CTT has generated?

Are you absolutely sure you are using a 5200K LED? The AWB seems to converge to a completely different value. @davidplowman, assuming there is only a single LS table in the tuning, no interpolation should happen, right? So the AWB-detected colour temperature does not matter...?

davidplowman commented 2 years ago

@davidplowman, assuming there is only a single LS table in the tuning, no interpolation should happen, right? So the AWB-detected colour temperature does not matter...?

Correct. This illumination does seem to have quite high red and blue compared to what we see normally, though I don't think that should matter so long as things are calibrated appropriately.

harshilsheth0412 commented 2 years ago

@naushir, I've attached the correction file that ctt generates below. I used two identical images under the names alsc_5100K_1.dng and alsc_5200K_1.dng. I don't have a way to measure the color temperature, so I was just relying on the approximately white light that my RGB LED emits being like a white LED bulb, but yes, from those R and B values, 5200K seems way off. Is there a table that maps all these values to their corresponding color temperatures? Maybe I can reverse engineer the color temp, generate a correction table for the color temp and see if it helps?

imx219.txt

naushir commented 2 years ago

Maybe I can reverse engineer the color temp, generate a correction table for the color temp and see if it helps?

From what @davidplowman commented, this will likely not make a difference if you only ever calibrate a single colour temperature. My advice would be to look at why your LSC calibration is not producing a fully flat output. Are you able to try this with another lens perhaps?

EDIT: Based on what you said, you should only have a single colour temperature LS table calibration. So you should name your calibration files something like alsc_5100K_1.dng, alsc_5100K_2.dng.

harshilsheth0412 commented 2 years ago

Hey Naushir, unfortunately I don't have the option of changing our lens because of our setup limitations. Is there another way to debug this and get to a fully flat output? Could I parse through the generated json and blindly copy a fixed set of center values throughout the table, assuming that the center of the matrix always produces the best white?

naushir commented 2 years ago

Hey Naushir, unfortunately I don't have the option of changing our lens because of our setup limitations. Is there another way to debug this and get to a fully flat output? Could I parse through the generated json and blindly copy a fixed set of center values throughout the table, assuming that the center of the matrix always produces the best white?

I'm afraid this is not going to work. Lens shading effects are more extreme on the edges of the lens. The centre coefficients will not correct the edges appropriately.

Can you describe your LED light source a bit more? Is it in any way adjustable for lux or colour temperature? Does it require a certain amount of time to become stable in illumination/colour temperature? Perhaps it is worth keeping the LED on for a period (10 minutes or more, maybe) before taking calibration/test images. Additionally, are there any external light sources that could be changing?

Would you be able to take a set of DNG calibration images and share them with us? Ideally, you want to take multiple (2-3) images with the sensor at 0, 90, 180 and 270 degree orientations.

harshilsheth0412 commented 2 years ago

Hey Naushir, here's the LED part number we're using to give you a better idea: ltw-5630azl50. The datasheet suggests the color temperature for our part is 5000K. We give the light source enough time to settle. Calibration does not happen at bootup; we've tried letting the LED settle on white for a while (10 minutes) before taking the images, but that doesn't help.

The setup is enclosed in a 3D printed black enclosure to prevent any other light from interfering. I can take DNG images yes, but our sensor is fixed so a physical rotation is impossible. Do you mean taking images by just rotating the image using a libcamera --rotation parameter?

naushir commented 2 years ago

The setup is enclosed in a 3D printed black enclosure to prevent any other light from interfering. I can take DNG images yes, but our sensor is fixed so a physical rotation is impossible. Do you mean taking images by just rotating the image using a libcamera --rotation parameter?

Can you rotate the LED in the housing? If you cannot physically rotate the sensor or LED, it does not matter much. Taking multiple images either way would help average out the errors.

naushir commented 1 year ago

If there is nothing further to be done here, I'll close this issue now.

rwb27 commented 1 year ago

Hi @naushir, a colleague sent me a link to this and I thought I'd chime in: I think you've probably already found most of what I have to say on this topic (a paper and some code) but I will briefly share that I think your problem is not quite what's understood by "camera tuning" because you're not trying to make something that will function well in different conditions.

When I use my microscope, I am trying to answer the question "what is the transmission of my sample", which means I want to cancel out the effect of my illumination. So, I acquire an image with no sample (i.e. it should give 100% transmission) and make sure that calibrates out to white (or at least bright gray, because I don't want saturation anywhere). Then, images I take should reflect the absorption of the sample, and not the characteristics of my system.

This is a much simpler correction than most cameras do - I am effectively dividing every subsequent image by my initial image, with fixed white balance/exposure/gain/etc. but it's appropriate for a microscope (at least in bright field). Fixing all those settings also means that a green image should come out green.
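
In numpy terms, that correction is roughly the following (a sketch of the idea, not code from Richard's project):

    import numpy as np

    def flat_field_correct(image, reference, eps=1e-6):
        """Divide a capture by a reference 'no sample' image taken with fixed settings."""
        img = image.astype(np.float32)
        ref = reference.astype(np.float32)
        corrected = img / np.maximum(ref, eps)   # ~1.0 where transmission is 100%
        return np.clip(corrected * 255.0, 0, 255).astype(np.uint8)

    # usage: reference = capture of the bare diffuser; image = capture with the sample
    # corrected = flat_field_correct(image, reference)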

Unfortunately, this corrects for less-than-100% transmission of each colour channel, but not for leakage of green light into the red pixels (a result of the CRA compensation on the sensor - described in the paper). That requires not just gain adjustment across the camera, but also a matrix to "unmix" the colours at the edges. This wasn't possible using the old picamera interface (or picamerax, which I hope we can retire soon), but I'd be really interested to know if it's something we could do with the new system. I have not yet had time to get to know it. Others have mentioned that the three colour channels on your camera are not the full story and that's very true - but unmixing the channels gets you most of the way there if your illumination is fixed.

In short, I would expect that your images could be calibrated to be uniform white with no sample, but that the saturation of the colours will fall off towards the edges. That's something you can correct for in post processing, but not (AFAIK) in real time.

Lastly, when using it for more scientific applications, it may be a good idea to work only with the raw pixel values, if you can cope with the speed penalty. This avoids any nonlinearities that might creep into your system from other processing steps, and puts you in full control. It usually means images look slightly less visually appealing, but personally I am in favour of seeing all the noise then trying to make my measurement, rather than working with already-processed data that may have unexpected artifacts.

naushir commented 1 year ago

Hi @rwb27, thank you for your insights! As you've no doubt seen, libcamera/picamera2 is vastly different from, and much more capable than, the old firmware-based picamera interface. In particular, you have much better control of what the algorithms do and how they work, and one of the key things here is to disable the "auto" aspects of some of our algorithms.

Starting with AE/AWB, as you've mentioned, you can simply provide manual shutter/analogue gain and red/blue gain values. Lens shading/vignetting correction is a bit more complicated: you have a 16x12 grid of luminance/Cb/Cr correction gains to be applied. The correction grids also adapt over time through a software algorithm - this can be turned off by setting the "rpi.alsc.n_iter" key to 0 in the tuning file if that is suitable.

That requires not just gain adjustment across the camera, but also a matrix to "unmix" the colours at the edges. This wasn't possible using the old picamera interface (or picamerax, which I hope we can retire soon), but I'd be really interested to know if it's something we could do with the new system. I have not yet had time to get to know it.

I think I understand the operation required here; it is different from the shading/vignetting correction above. Unfortunately, we do not have any hardware in the ISP that can perform this sort of correction. The only way would be to retrieve a RAW/DNG from the device, apply this per-pixel matrix operation across the image, and let an application like RawTherapee convert/process the image from DNG into RGB.

This won't help you just yet, but there are plans for libcamera to support a "reprocessing" mode, where you can capture the RAW frame from a device, apply some software processing, and inject the frame back through the hardware ISP for another processing pass. This streamlines things a bit more for applications like yours.

@davidplowman may also have some thoughts on this.