raspberrypi / picamera2

New libcamera based python library
BSD 2-Clause "Simplified" License

[HOW-TO] Output Correct Raw Pixel Intensities Across Different Exposure Times #1040

Closed JosephDemarest closed 3 months ago

JosephDemarest commented 4 months ago

Describe what it is that you want to accomplish I am attempting to determine the noise floor of the camera by capturing images at various exposure times and then analyzing the average pixel intensities. I expect the pixel intensity values to vary with changes in exposure time. In particular, I'm using raw capture without any demosaicing to get the true sensor output in a grayscale format.

Essentially, I want an array of counts: the number of counts recorded at each pixel.

Describe alternatives you've considered I've attempted capturing raw images both directly with the libcamera-raw command and through a Python script using Picamera2. The pixel values I retrieve are always between 254 and 256, regardless of the exposure time set (ranging from 100 microseconds to 1 second). I'm doing this with the camera covered to see whether I can measure electronic noise / dark current noise, but the values are always the same, which I assume points to an issue with how I am capturing or interpreting the raw data. Here's what I've tried:

- Adjusting camera settings such as exposure time and analog gain.
- Using different methods to capture and process the raw data.
- Checking this behavior with a brand new camera module to make sure hardware issues, like my chemically removed Bayer filter, aren't influencing the results. (They aren't.)

Additional context Below is the Python code I've been using for these tests. It configures the camera, captures images at different exposure times, and calculates average pixel intensities. Despite the camera being covered, the pixel intensities remain consistently at ~255 across all exposure settings, which is unexpected and leads me to suspect a flaw in how the raw data is being captured or processed.

import time
import numpy as np
from picamera2 import Picamera2
from matplotlib import pyplot as plt

picam2 = Picamera2()
# Full-resolution unpacked 12-bit raw stream from the HQ camera.
config = picam2.create_still_configuration(raw={'format': 'SBGGR12', 'size': (4056, 3040)})
picam2.configure(config)

print('Sensor configuration:', picam2.camera_configuration()['sensor'])
print('Stream Configuration:', picam2.camera_configuration()['raw'])

exposure_times = [100, 1000, 10000, 100000, 1000000]  # microseconds
average_intensities = []

for exposure_time in exposure_times:
    picam2.set_controls({"ExposureTime": exposure_time, "AnalogueGain": 1.0})
    time.sleep(1)
    picam2.start()
    time.sleep(5)  # give the requested controls time to take effect
    # The raw stream arrives as a uint8 array; reinterpret byte pairs as 16-bit pixels.
    data8 = picam2.capture_array('raw')
    data16 = data8.view(np.uint16)
    avg_intensity = np.mean(data16)
    average_intensities.append(avg_intensity)
    plt.figure()
    plt.imshow(data16)
    plt.colorbar()
    plt.show()
    picam2.stop()
    time.sleep(5)

for i, exposure_time in enumerate(exposure_times):
    print(f'Exposure Time: {exposure_time / 1000:.3f} ms - Average Intensity: {average_intensities[i]:.2f}')

I also used this libcamera-raw command, which shows the same issue:

libcamera-raw -t 1000 --analoggain 1.0 --shutter 1000000 --mode 4056:3040:12:U -o test1000000.raw

I am most likely missing something very simple...

EDIT: fixed the code tags

davidplowman commented 4 months ago

Hi, thanks for the question. Is it maybe just that you're not accounting for the sensor black level? The raw images from sensors always contain a pedestal value which has to be subtracted to get the true pixel levels. So for a completely black image, the pixels should all be at around the black level value (they can be a little higher or lower).

For 10-bit raw modes, the black level is normally 64, and for 12-bit raw modes it is usually 256. (Our v1 camera is different, but that's the only obvious exception that comes to mind.)
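In other words (a minimal sketch, assuming the unpacked 12-bit data16 array from the script above and the nominal 12-bit pedestal of 256):

import numpy as np

NOMINAL_BLACK_LEVEL_12BIT = 256   # typical pedestal for 12-bit raw modes (64 for 10-bit)

# For a covered sensor, the raw values should cluster around the pedestal,
# scattering a little above and below it due to noise.
offset = data16.astype(np.int32) - NOMINAL_BLACK_LEVEL_12BIT
print('Mean offset from pedestal:', offset.mean(), ' spread:', offset.std())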

JosephDemarest commented 4 months ago

Hi, thanks for that info. I'm going to look into that and get back to you.

Also, the metadata for my images says my HQ cam has a black level of 4096. Is that normal?

davidplowman commented 4 months ago

Yes, because the black level is reported as a value scaled to 16-bit pixels (which is what our tuning files use, because we want them to be as similar as possible irrespective of whether we have a 10- or 12-bit sensor). So dividing 4096 by 16 (to turn the 16-bit value into a 12-bit one) gives you 256.
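As a quick check of that arithmetic (a trivial sketch; the 4096 figure is the value reported in the capture metadata, as mentioned above):

reported_black_level = 4096            # black level as reported, scaled to 16-bit pixels
sensor_bit_depth = 12                  # HQ camera 12-bit raw mode
native_black_level = reported_black_level >> (16 - sensor_bit_depth)
print(native_black_level)              # 256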

JosephDemarest commented 4 months ago

Thank you for the clarification regarding the black level. Based on your explanation, I understand that the black level for my 12-bit HQ camera should effectively be around 256. However, my observations seem inconsistent with what we might expect given this black level adjustment.

The raw output from my HQ camera consistently shows pixel values ranging from 45 to 47, which is substantially below the expected baseline of around 256 for a completely black image. This raises a question: is it possible that the subtraction of the black level (256) has already been applied to the raw data I'm accessing? If that were the case, it would imply the actual captured values were around 301 to 303 (46 + 256). However, this still doesn't align with the expected baseline for a 12-bit sensor.

More puzzling is the fact that these values do not change regardless of the exposure time. I have tested exposure times varying from 100 microseconds to 10 seconds. Under normal circumstances, even in a covered sensor setup aimed at measuring electronic noise or dark current noise, we would expect some variation in pixel values with such significant changes in exposure time.

Additionally, adjustments to the camera's operating temperature show little to no effect on the measured values. This is unusual since sensor noise characteristics typically vary with temperature changes.

My goal is to accurately determine the average pixel values across the entire sensor when completely covered, under various exposure times and temperatures, to assess the noise floor and sensor performance. These measurements are critical for my application, which relies on precise noise estimation to calibrate other imaging processes.

Could there be a misunderstanding or an issue with how I'm accessing or interpreting the raw data? Is there possibly a firmware or hardware-related factor that could cause the sensor output to behave this way, especially considering the lack of variability in response to changes in exposure and temperature?

I've tried this on both a RPi4 and RPi5 and see similar results. I plan on continuing to use the RPi5 going forward.

I appreciate your insights and any further guidance you could provide to help resolve these discrepancies.

davidplowman commented 4 months ago

It feels like something just isn't set up right. I'd start by running something like this https://github.com/raspberrypi/picamera2/blob/main/examples/capture_dng_and_jpeg.py

Check that the JPEG is what you expect, and you can also convert the DNG (e.g. dcraw -w full.dng; use sudo apt install dcraw if you don't have it) to check that the raw is sensible too.

Once that's working, add something like data8 = r.make_array("raw") to check that your analysis of the raw data is giving what you expect. If the DNG file converts OK, then the raw data must be good.
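For reference, a minimal sketch along the lines of that example (file names here are placeholders, and the exact contents of the example may differ; the request-based capture_request/save/save_dng/make_array calls are the ones it relies on):

import time
from picamera2 import Picamera2

picam2 = Picamera2()
# Still configuration including a raw stream so a DNG can be saved alongside the JPEG.
config = picam2.create_still_configuration(raw={})
picam2.configure(config)
picam2.start()
time.sleep(2)                  # give the camera a moment to settle

r = picam2.capture_request()
r.save("main", "full.jpg")     # processed JPEG, for a quick visual sanity check
r.save_dng("full.dng")         # raw Bayer data as a DNG (convert with dcraw -w full.dng)
data8 = r.make_array("raw")    # raw stream as a numpy array, for direct analysis
r.release()                    # always release the request when done with it
picam2.stop()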

Also, did you say what kind of a Pi you have?

JosephDemarest commented 4 months ago

I have a Pi 4 and a Pi 5. I see the same issue on both. These images are taken using that script, using the HQ camera, with the sensor cap installed. Not sure where to go from here. images.zip

JosephDemarest commented 4 months ago

I took those on the Pi 5, by the way; I'm transitioning from 3s and 4s to all 5s. These things are fast as F

njhollinghurst commented 4 months ago

The sensor has some un-illuminated pixels surrounding the image, and uses them to offset the black level to drive it to the nominal value. This is to correct for variations across the sensor, temperature changes, etc. It probably also does some "hot pixel" correction. All this happens before the Raspberry Pi receives the data. So you can't get the "true" (uncorrected) pixel values, at least not with the sensor register settings in the standard Linux driver.

Due to noise, pixels can vary above and below the black level. You might possibly see the noise variance increase with exposure, as (I'm guessing) dark current shot noise and uncorrected pattern noise may come to the fore(?)

It's not clear what explains the difference between the 256 you reported originally and the 46 mentioned above.
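To look for that variance effect, one could track the spread of each covered frame as well as its mean, e.g. (a rough sketch, assuming the data16 arrays captured at each exposure time in the original script):

import numpy as np

def dark_frame_stats(raw16):
    """Mean and standard deviation of a covered-sensor frame.

    The mean stays pinned near the pedestal by the sensor's own black-level
    correction; any exposure dependence is more likely to show up in the spread.
    """
    return float(np.mean(raw16)), float(np.std(raw16))

# e.g. inside the exposure loop:
# mean, spread = dark_frame_stats(data16)
# print(f'{exposure_time} us: mean={mean:.1f}, std={spread:.2f}')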

davidplowman commented 4 months ago

> I have a Pi 4 and a Pi 5. I see the same issue on both. These images are taken using that script, using the HQ camera, with the sensor cap installed. Not sure where to go from here. images.zip

Hi again, I took a look at those files and they look OK to me. I'm guessing they're taken on a Pi 5 because Pi 5 bumps the raw images up to 16 bits and I think Pi 4 doesn't. Anyway, you can tell by looking at the black and white levels of the image. I used Python rawpy to have a snoop inside the DNG file:

>>> import rawpy
>>> import numpy as np
>>> raw = rawpy.imread("full.dng")
>>> raw.white_level
65535
>>> raw.black_level_per_channel
[4096, 4096, 4096, 4096]
>>> np.mean(raw.raw_image)
4113.934414253088
>>> raw.raw_image
array([[4176, 4160, 4016, ..., 4128, 4224, 4192],
       [4096, 4272, 4192, ..., 4304, 4096, 4160],
       [4128, 4048, 3872, ..., 4128, 4192, 4192],
       ...,
       [4240, 4048, 4224, ..., 4192, 4160, 4096],
       [4128, 4128, 4080, ..., 4112, 4048, 4096],
       [4256, 4048, 4192, ..., 3856, 4112, 3888]], dtype=uint16)

which all looks good to me. Are you able to reproduce that?

JosephDemarest commented 4 months ago

> Are you able to reproduce that?

Yes, I am able to reproduce this. Since it's 16-bit, do I subtract 4096 from each value? What about the negatives?

davidplowman commented 3 months ago

That's right. And you need to clamp anything that would be negative to zero.
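Concretely, something like this (a minimal sketch, assuming the raw_image array from the rawpy session above):

import numpy as np
import rawpy

raw = rawpy.imread("full.dng")
# Subtract the 16-bit-scaled pedestal and clamp anything that would go negative to zero.
signal = np.clip(raw.raw_image.astype(np.int32) - 4096, 0, None).astype(np.uint16)
print('Mean dark signal above the black level:', signal.mean())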

JosephDemarest commented 3 months ago

Thanks! I appreciate it!