raspberrypi / picamera2

New libcamera based python library
BSD 2-Clause "Simplified" License

[HOW-TO] Synchronize gain adjustment between two Raspberry Pi cameras #1116

Open Raccoon987 opened 1 week ago

Raccoon987 commented 1 week ago

I have two externally triggered Raspberry Pi global shutter cameras connected to a Raspberry Pi 5, with each camera running in its own thread. They capture nearly identical but slightly shifted fields of view, and I can apply an affine transformation to spatially align them. However, the luminous flux between the two cameras differs by up to 10%. Both cameras have a fixed exposure, but due to the shifted fields of view and the difference in light flux, each camera pre-processes its image differently and selects a different analog gain.

My goal is to find a fast way to make the output image arrays as pixel-wise equivalent as possible in terms of pixel brightness.

I've plotted the full range relationship between pixel intensities from both cameras, to create a lookup table. But this is only valid when both cameras have the same fixed analog gain.

Is there a way for the first camera to automatically select the optimal gain (with AeEnable=True) while locking the second camera to that same gain value? In other words, the first camera would adjust its gain, and the second camera would then match its gain to the first camera.
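For example, something along these lines is what I have in mind (an untested sketch; cam1 and cam2 stand for the two started Picamera2 instances):

# Untested sketch: camera 1 runs AE; camera 2 is locked to the gain camera 1 chose.
gain = cam1.capture_metadata()["AnalogueGain"]   # gain selected by camera 1's AE
cam2.set_controls({"AeEnable": False, "AnalogueGain": gain})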

I appreciate your help in advance.

davidplowman commented 1 week ago

Ah, I see that I've answered this on the forum! Might it be easier to keep the discussion here, going forward?

Raccoon987 commented 6 days ago

Before we decide whether to continue the discussion here, could you provide a link to where it was previously discussed? My answer depends on the details covered there. My main goal is to achieve intensity-equivalent images; gain synchronization is only one possible solution.

davidplowman commented 6 days ago

I was referring to the reply that I posted here: https://forums.raspberrypi.com/viewtopic.php?t=376829#p2254996

Raccoon987 commented 6 days ago

I think we can continue the discussion here.

My Global Shutter Raspberry Pi cameras are externally triggered by a Raspberry Pi Pico, as outlined in the GS camera manual. This lets me capture images simultaneously, with equal exposure set on both cameras. However, explicitly setting the analog gain disrupts synchronization, as does setting different exposures for each camera. I'm not sure why this happens; perhaps you could explain it to me. I only need monochrome images, so I've disabled AWB (AwbEnable = False) and set {"rpi.awb": {"bayes": 0}} in the imx296.json file.

The best approach to producing equal images would be to equalize the light flux before it reaches the camera lenses. Unfortunately, for various reasons, I can't physically make the light fluxes equal. My next option is to reduce the sensitivity of one of the camera sensors to balance the light. As far as I can tell, the camera's ISO can only be adjusted by controlling the exposure and analog gain. This is the last stage where a linear reduction of light or sensor sensitivity, proportional to the roughly 20% difference in light flux, could still produce equal images (e.g., if one sensor receives 20% less light, raising its gain by a factor of about 1.2 should in principle equalize the raw signals). Beyond this point, the accumulated and converted light signal is processed by the image preprocessing algorithms controlled by the imx296.json file. Since the cameras receive different light fluxes, they calculate their gain values independently.

After several experiments, I noticed that the difference between the resulting images seems to depend on pixel intensity: the ratio between corresponding bright pixels from the two cameras is not the same as the ratio between mid-range or dark pixels, so there is a nonlinear relationship. I plotted pixel intensity from the first camera against the second camera for the full range (0 to 255), and this relationship was nonlinear even when gain and exposure were fixed. Without locking the gain, there is an additional uncontrolled variation in intensity, as each camera selects its own gain. When I set AeEnable = False, I get synchronized image capture with the analog gain fixed at 1 for both cameras, but the images are too dark. I don't want to disable the gain adjustment algorithm completely, because it is useful. I realize this issue extends beyond the original topic title; sorry for that.

Any ideas on how I could solve this problem?

davidplowman commented 6 days ago

Just to understand, can you say a little more about what you're doing? I think I understood that:

Raccoon987 commented 5 days ago

According to the Raspberry Pi camera documentation (https://www.raspberrypi.com/documentation/accessories/camera.html), I connected the cameras' GND and XTR pins to a Raspberry Pi Pico. First I enabled external trigger mode on the Pi:

sudo su
echo 1 > /sys/module/imx296/parameters/trigger_mode
exit

and then ran this MicroPython code on the Pico controller:

from machine import Pin, PWM
from time import sleep

pwm = PWM(Pin(28))                    # trigger output, wired to the cameras' XTR pins
framerate = 60
shutter = 2000                        # exposure time in microseconds
frame_length = 1000000 / framerate    # frame period in microseconds
pwm.freq(framerate)
# high fraction = 1 - (shutter - 14) / frame_length, i.e. one short trigger
# pulse of (shutter - 14) us per frame
pwm.duty_u16(int((1 - (shutter - 14) / frame_length) * 65535))
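A quick sanity check of the duty-cycle numbers (plain Python; my understanding is that the sensor adds roughly 14 us to the pulse width, hence the offset in the formula):

framerate = 60
shutter = 2000
frame_length = 1000000 / framerate    # ~16667 us per frame at 60 fps
duty = int((1 - (shutter - 14) / frame_length) * 65535)
print(duty)                           # 57725 -> the XTR line is high ~88% of each frame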

Afterward, I ran the main() function and successfully achieved synchronized image capture. To verify this, I ran the check_sync() function and got an output like:

16000
18000
10000
22000
... 

When I uncommented the "AnalogueGain": 8.0 line in the start_camera(index) function and ran check_sync() again, I got output like:

16000000
12000000
11000000
...

The difference is three orders of magnitude: the gap between the cameras' timestamps is now measured in milliseconds, not microseconds, so I conclude that setting the gain breaks synchronization. The same break happens for different exposure times, but that case is fairly obvious, since the shutter value is explicitly defined in the MicroPython code.

from picamera2 import Picamera2
import threading
import time
import cv2
import numpy as np

def capture_and_process(picam, result_list, meta_list, index):
    # Capture one frame, convert it to grayscale and store it with its metadata.
    request = picam.capture_request()
    metadata = request.get_metadata()
    array = request.make_array(name="main")
    # The default preview format is 4-channel (XBGR8888), hence the RGBA conversion.
    array = cv2.cvtColor(array, cv2.COLOR_RGBA2GRAY)
    result_list[index] = array
    meta_list[index] = metadata
    request.release()

def capture_timestamp(picam, result_list, index):
    # Store this frame's SensorTimestamp (in nanoseconds) for the sync check.
    request = picam.capture_request()
    metadata = request.get_metadata()
    ts = int(metadata["SensorTimestamp"])
    result_list[index] = ts
    request.release()

def start_camera(index):
    picam = Picamera2(index)
    print("Camera sensor modes: ", picam.sensor_modes)
    config = picam.create_preview_configuration(
        controls={"FrameDurationLimits": (16667, 16667),  # fixed ~16.7 ms frame period (60 fps)
                  "FrameRate": 60,
                  "ExposureTime": 2000,  # microseconds, matching the Pico trigger
                  "Saturation": 0,
                  "AwbEnable": False,
                  #"AnalogueGain": 8.0,  # uncommenting this breaks synchronization
                  })
    print(f"camera {index} main config: ", config["main"])
    picam.start(config)
    time.sleep(0.5)
    return picam

def check_sync():
    # Capture from both cameras in parallel and print the difference between
    # their SensorTimestamps (nanoseconds) for every twentieth frame.
    picams = [start_camera(i) for i in range(2)]
    results = [None] * len(picams)
    try:
        c = 0
        while True:
            threads = [threading.Thread(target=capture_timestamp, args=(picam, results, index)) for index, picam in
                       enumerate(picams)]

            for thread in threads:
                thread.start()

            for thread in threads:
                thread.join()  
            c += 1
            if c % 20 == 0:
                print("timestamp delta between two cameras: ", results[0],  results[1], abs(results[0] - results[1]))    
    except KeyboardInterrupt:
        # Ctrl + C to properly stop cameras
        print("Stopping cameras...")
    finally:
        [c.stop() for c in picams]
        print("Cameras stopped.")  

def main():
    picams = [start_camera(i) for i in range(2)]
    results = [None] * len(picams)
    metadata = [{}] * len(picams)

    try:
        while True:
            threads = [threading.Thread(target=capture_and_process, args=(picam, results, metadata, index)) for index, picam in
                       enumerate(picams)]

            for thread in threads:
                thread.start()
            for thread in threads:
                thread.join()

            cv2.imshow('Master/Bottom', np.flip(results[0], axis=1))
            cv2.imshow('Slave/Top', results[1])

            if cv2.waitKey(1) == ord('q'):
                break
    except KeyboardInterrupt:
        pass
    finally:
        [c.stop() for c in picams]
        cv2.destroyAllWindows()
        print("Cameras stopped.")

if __name__ == "__main__":
    main()

So answers to your questions:

1) Yes.
2) Yes.
3) Yes.
4) A construction like this one:

while True:
    # copy the gain chosen by camera 1's AE to camera 2, every frame
    cam2.set_controls({'AnalogueGain': cam1.capture_metadata()['AnalogueGain']})

This is fine for me as long as it doesn't break synchronization or lead to dropped frames. I'll check it.
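One variant I might try, since capture_and_process() already stores each frame's metadata, is to copy the gain inside the main loop after the joins; a minimal untested sketch:

# In main(), after the thread.join() loop (untested):
gain1 = metadata[0]["AnalogueGain"]              # the gain camera 1's AE actually used
picams[1].set_controls({"AnalogueGain": gain1})  # not sure yet how quickly this takes effect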

5) How can I turn off the gamma transform? Based on my experiments, the pixel intensity relationship is almost linear in the low and mid-range intensity regions but becomes nonlinear for bright pixels, and even in the linear region the slope changes slightly. I would prefer a fully linear response, as I don't need a 'nice' picture, just one that is simple and predictable.

6) I want the first camera to adjust its gain automatically, since this adjustment does a good job of keeping the image neither too bright nor too dark, and I would then like to link the second camera's gain to the first. Otherwise, depending on the environment, each camera's preprocessing picks its own gain, and the first image might come out either brighter or darker than the second in a way that is unpredictable for me.

davidplowman commented 5 days ago

Thanks for all the information. I probably need to pass this on to someone who has actually used the external trigger mechanism, but unfortunately he's on holiday so it would be into next week before he could get back to you.

But just to comment on a few other things:

  1. When you quoted those numbers (16000, 18000 and so on), it wasn't clear to me what they were. I couldn't spot where you were printing them in the code either. Did I miss something or could you clarify?

  2. One problem with setting the camera's analogue gain while it is running is that it takes several frames to take effect. For it to take effect immediately, you would need to stop the camera, set the analogue gain, then restart it. But that's a relatively slow process too, so it depends what kind of frame rate you are hoping to achieve.

  3. You can turn off the gamma transform by finding "rpi.contrast" in the camera tuning file and changing it to "x.rpi.contrast" (which effectively "comments it out"). The tuning file will be called imx296.json, probably under /usr/share/libcamera/ipa/rpi/pisp (Pi 5) or /usr/share/libcamera/ipa/rpi/vc4 (other Pis). Of course, the resulting image will look dark but very contrasty. (There's a sketch after this list for making the same edit programmatically.)

  4. To get the greyscale version of an image, it would be more efficient to avoid cv2 and ask for 'YUV420' format instead. Then you could take the top "height" rows of the array directly.
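If you'd rather not edit the file under /usr/share in place, you can load the tuning file in Python, rename the algorithm in the loaded dictionary, and pass the result to Picamera2. A minimal sketch, assuming a version 2.0 tuning file in which "algorithms" is a list of single-entry dicts:

from picamera2 import Picamera2

tuning = Picamera2.load_tuning_file("imx296.json")
# Renaming the key to "x.rpi.contrast" makes libcamera skip the algorithm.
for algo in tuning["algorithms"]:
    if "rpi.contrast" in algo:
        algo["x.rpi.contrast"] = algo.pop("rpi.contrast")

picam2 = Picamera2(tuning=tuning)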

Raccoon987 commented 5 days ago

First of all, I want to thank you for this discussion and your help. I'm confident that we’ll find a solution through this dialogue.

  1. I forgot to include the capture_timestamp() function in my code; I've now added it to my code snippet. The numbers 16,000, 18,000, etc., are the differences between the timestamps of frames captured by the first and second cameras. SensorTimestamp is reported in nanoseconds, so a delta of 16,000 ns means the time shift between frame j of the first camera and frame j of the second is only 16 microseconds, i.e. the cameras capture frames simultaneously. Values like 16,000,000 or 12,000,000, however, mean the difference is measured in milliseconds, which, compared to the 2 ms exposure time and the 16.6 ms frame duration, indicates non-simultaneous capture.

In the check_sync() function, I start the two cameras in separate threads, retrieve the timestamps from the request metadata (using the capture_timestamp() function), and print the difference for every twentieth frame.

  2. Yes. With an FPS of 60, I can't use this method to set equal gains.

  3. I'll try this.

  4. Why is the 'YUV420' format more efficient in the case of grayscale images? Is this the correct modification?

w, h = 640, 480

def capture_and_process(picam, result_list, meta_list, index):
    request = picam.capture_request()
    metadata = request.get_metadata()
    array = request.make_array(name="main")
    # A YUV420 array has 3h/2 rows; the first 2/3 of them are the Y (luma) plane.
    y_h = array.shape[0] * 2 // 3
    array = array[:y_h, :]
    result_list[index] = array
    meta_list[index] = metadata
    request.release()

def start_camera(index):
    picam = Picamera2(index)
    print("Camera sensor modes: ", picam.sensor_modes)
    config = picam.create_preview_configuration(
        main={
            "size": (w, h),  
            "format": "YUV420",  
        },
        controls={"FrameDurationLimits": (16667, 16667),
                  "FrameRate": 60,
                  "ExposureTime": 2000,
                  "Saturation": 0,
                  "AwbEnable": False, 
                  #"AnalogueGain": 8.0,
                  })
    print(f"camera {index} main config: ", config["main"])
    picam.start(config)
    time.sleep(0.5)
    return picam 

How should I display the resulting array? The same as before, using cv2.imshow? I assume the Y plane is already a single-channel 8-bit image, so it should display directly.

Thank you.

Raccoon987 commented 5 hours ago

[Plot: pixel intensity of camera 1 (x-axis) against camera 2 (y-axis); blue points: 15 ms exposure, analog gain 15; red points: 15 ms exposure, analog gain 5; dashed diagonal marks y = x]

The x-axis represents the intensity of the pixel at [i, j] from the first camera, and the y-axis the intensity of the same pixel from the second camera. Blue dots show data with both cameras at a 15 ms exposure and a fixed analog gain of 15; red dots show the same 15 ms exposure with a fixed analog gain of 5. All points lie above the dashed diagonal because the luminous flux between the two cameras differs by up to 10% or more.

The relationship is nonlinear, but I can easily equalize the image intensity using this curve. However, explicitly setting the analog gain disrupts synchronization, and when the gain isn't fixed, each camera chooses its gain independently, causing the curve to shift, sometimes below the dashed diagonal if the 'weaker' first camera ends up with a much higher gain than the second. For each frame we get two sets of metadata; for each pair of [i, j] pixels, the intensities fall on a curve like the one shown in the image, but the curve's position and shape depend on the camera parameters stored in the metadata.

I would like to:

1) For a known gain difference between the cameras, plus other information from the frame metadata, be able to reproduce the full curve, i.e. obtain a function like

camera1_intensity = F(camera1_gain, camera2_gain, camera1_metadata, camera2_metadata)(camera2_intensity)

Afterward, I can build a lookup table covering all 256 intensity values.

2) (Optional) Flatten this curve to achieve a linear relationship.
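Once the curve can be reproduced, applying it is cheap with a lookup table. A minimal sketch, where curve_x / curve_y are sampled points of the measured camera2-to-camera1 mapping (hypothetical names and example values, just for illustration):

import numpy as np
import cv2

# Sampled points of the measured relationship (hypothetical values):
# curve_x = camera 2 intensities (must be increasing for np.interp),
# curve_y = the matching camera 1 intensities.
curve_x = np.array([0, 32, 64, 128, 192, 255], dtype=np.float32)
curve_y = np.array([0, 28, 57, 116, 180, 243], dtype=np.float32)

# Build a LUT mapping every possible 8-bit camera 2 value to a camera 1 value.
lut = np.interp(np.arange(256), curve_x, curve_y).astype(np.uint8)

# equalized = cv2.LUT(frame2, lut)   # frame2: a grayscale frame from camera 2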

The imx296.json file contains the following algorithms:

"rpi.black_level"
"rpi.lux"
"rpi.dpc"
"rpi.noise"
"rpi.geq"
"rpi.denoise"
"rpi.awb"         Turn off by setting "AwbEnable": False in camera controls or "rpi.awb": {"bayes": 0} in .json file
"rpi.agc"
"rpi.alsc"
"rpi.contrast"    turn off by setting "x.rpi.contrast"
"rpi.ccm"
"rpi.sharpen"
"rpi.hdr" 

Since AWB and contrast are already off, what else can I disable to achieve a linear grayscale intensity response? Also, how can I predict the curve's position from the frame metadata and the gain difference between the cameras?