dasl- opened this issue 8 months ago
I did a little more digging. Here are the modes of my camera:
% python3
>>> from pprint import *
>>> from picamera2 import Picamera2
>>> picam2 = Picamera2()
...
>>> pprint(picam2.sensor_modes)
...
[{'bit_depth': 10,
'crop_limits': (768, 432, 3072, 1728),
'exposure_limits': (9, None),
'format': SRGGB10_CSI2P,
'fps': 120.13,
'size': (1536, 864),
'unpacked': 'SRGGB10'},
{'bit_depth': 10,
'crop_limits': (0, 0, 4608, 2592),
'exposure_limits': (13, 77208384, None),
'format': SRGGB10_CSI2P,
'fps': 56.03,
'size': (2304, 1296),
'unpacked': 'SRGGB10'},
{'bit_depth': 10,
'crop_limits': (0, 0, 4608, 2592),
'exposure_limits': (26, 112015443, None),
'format': SRGGB10_CSI2P,
'fps': 14.35,
'size': (4608, 2592),
'unpacked': 'SRGGB10'}]
It looks like the last 2 modes will give me the full field of view, based on their crop_limits values. But they have different size values - the middle mode has a smaller size: (2304, 1296).
I updated my code:
raw={'size': (2304, 1296)},
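Incidentally, rather than hard-coding the raw size, it could be derived from sensor_modes. This is just a sketch, and it assumes each mode dictionary has the 'crop_limits', 'size' and 'fps' keys shown in the pprint output above:

```python
# Sketch: pick the raw size from sensor_modes instead of hard-coding it.
# (0, 0, 4608, 2592) is the full-sensor crop on this particular camera.

def fastest_full_fov_size(modes, full_crop=(0, 0, 4608, 2592)):
    # Keep only modes that use the whole sensor area (full field of view).
    full_fov = [m for m in modes if m["crop_limits"] == full_crop]
    # Among those, prefer the highest frame rate (the 2x2 binned mode here).
    return max(full_fov, key=lambda m: m["fps"])["size"]
```

With the modes listed above this returns (2304, 1296), which could then be passed as the raw size.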
Now it runs a bit faster. Improvement from 0.12 seconds to 0.09 seconds:
Image capture took 0.093 s. (270, 480, 3)
I wonder if there's any way to make it go even faster though :)
Today I did some more digging and learned about the FrameRate option: controls={'FrameRate': 50}. Putting this into my script, I am able to go even faster. Here is my updated script:
import picamera2
import time

picam2 = picamera2.Picamera2()
config = picam2.create_still_configuration(
    main={"size": (480, 270)},  # scale down the image, but maintain the full field of view
    raw={'size': (2304, 1296)},
    buffer_count=2,
    controls={'FrameRate': 50},
)
print(f"Using config: {config}")
picam2.configure(config)
picam2.start()

while True:
    loop_start = time.time()
    output_orig = picam2.capture_array()
    img_capture_elapsed_s = round(time.time() - loop_start, 3)
    print(f"Image capture took {img_capture_elapsed_s} s. {output_orig.shape}")
This improves the speed from 0.09 seconds to ~0.02 seconds:
Image capture took 0.023 s. (270, 480, 3)
However, at this speed, I have noticed that the captured images are much darker when setting the FrameRate to high values. I guess this makes sense - presumably there is less time to expose the image to light at a higher frame rate.
Is there any way to get faster speeds while still having a bright image?
In case it's helpful, here's the config that my script prints out when I run it:
Using config: {'use_case': 'still', 'transform': <libcamera.Transform 'identity'>, 'colour_space': <libcamera.ColorSpace 'sYCC'>, 'buffer_count': 2, 'queue': True, 'main': {'format': 'BGR888', 'size': (480, 270)}, 'lores': None, 'raw': {'format': 'SBGGR10_CSI2P', 'size': (2304, 1296)}, 'controls': {'NoiseReductionMode': <NoiseReductionModeEnum.HighQuality: 2>, 'FrameDurationLimits': (100, 1000000000), 'FrameRate': 50}, 'display': None, 'encode': None}
Here's an example of how dark the image is when I set controls={'FrameRate': 50}:
Whereas it's much brighter if I omit that line of code:
Hi, a "still" configuration is generally more aimed towards higher quality and slower captures. A Pi 4, for example, will perform a relatively slow software denoise operation on all the images. Additionally, it defaults to only 1 buffer (to save memory for lower-end platforms, thinking that framerate isn't too important) - you've increased this to 2 but going higher might help more.
I would consider trying create_video_configuration instead. This is geared towards continuous image streams at higher framerates. It will use a fast version of the software denoise, and many more buffers.
You've certainly done the right thing in forcing the choice of the 2x2 binned mode, which can run at over 50fps.
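That suggestion might look something like the sketch below, reusing the sizes from the script earlier in the thread. It is untested here since it needs the camera attached:

```python
import picamera2

picam2 = picamera2.Picamera2()

# Video configurations default to the fast software denoise and more
# buffers, which suits continuous high-framerate capture.
config = picam2.create_video_configuration(
    main={"size": (480, 270)},   # scaled-down output, full field of view
    raw={"size": (2304, 1296)},  # force the 2x2 binned full-FoV sensor mode
    controls={"FrameRate": 50},
)
picam2.configure(config)
picam2.start()
frame = picam2.capture_array()
```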
As regards the observed darkness, have a look at the metadata on the captured images (use picam2.capture_metadata()). In particular, look at the ExposureTime and AnalogueGain fields. I'm guessing the exposure time is now limited to 20ms (1/50s of course), which it wasn't previously. There may be some scope to increase the analogue gain further to mitigate that, though you'll get noisier images. But the first step is just to compare the exposure/gain values with/without the fast framerate.
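That 20ms figure follows directly from the frame duration arithmetic; the helper below is just an illustration, and the commented lines show the metadata check (assuming picam2 is configured and started as in the script above):

```python
def max_exposure_us(frame_rate):
    # An exposure can never be longer than one frame duration, so capping
    # the frame rate also caps the exposure time the AGC may choose.
    return int(1_000_000 / frame_rate)

print(max_exposure_us(50))  # 20000 us, i.e. 20 ms at 50 fps

# On the camera itself, inspect what the AGC actually chose:
# md = picam2.capture_metadata()
# print(md["ExposureTime"], md["AnalogueGain"])
```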
Hi, I'd like to complement David's answer.
Be careful: after picam2.start() it is better to wait for the camera to stabilize exposure, gains, etc. - time.sleep(2).
Then, to measure the throughput, I advise you to use an averaging loop:
Libcamera request loop only
count = 10
startTime = time.time()
for i in range(0, count):
    request = picam2.capture_request()
    metadata = request.get_metadata()
    request.release()
stopTime = time.time()
fps = float(count / (stopTime - startTime))
print("Metadata only", " Spf:", 1./fps, " Fps:", fps)
Then you can add the line: image = request.make_array("main")
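The extended loop might look like the sketch below. The timing logic is pulled into a helper so it doesn't depend on the camera, and the commented part (which assumes picam2 is configured and started as above) shows the camera-side usage:

```python
import time

def measure_fps(capture_one, count=10):
    # Average over several captures so one-off delays don't skew the result.
    start = time.time()
    for _ in range(count):
        capture_one()
    elapsed = time.time() - start
    return count / elapsed

# On the camera:
# def grab():
#     request = picam2.capture_request()
#     metadata = request.get_metadata()
#     image = request.make_array("main")  # buffer copy: the expensive step
#     request.release()
# fps = measure_fps(grab)
# print("Request + make_array", " Spf:", 1.0 / fps, " Fps:", fps)
```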
For example, for my HQ camera on an RPi4 at maximum resolution (4056x3040) with 2 libcamera buffers: Request only: 10fps - here it is a hardware/libcamera limitation. Request + make_array: 5fps - make_array involves a copy of the buffer and is therefore quite expensive. It will be even slower if we add a JPEG or other conversion.
But at half resolution (2028x1520): Request only: 44fps; Request + make_array: 18fps. This seems normal as the arrays are 4 times smaller!
With an RPi5 at maximum resolution: Request only: 12fps; Request + make_array: 9.5fps. The gain is spectacular for processing but not so much for the hardware.
For the brightness problem, as David says, to understand what's happening you have to display the metadata: exposure, gains, etc.
Extremely helpful research @dasl- @dgalland @davidplowman Thank you!!
Hi there, I'm using a Raspberry Pi Camera Module 3 with a Raspberry Pi 4B. I'm trying to capture still images with the full field of view of the camera, but then scale them down to a smaller size. I'm trying to do this as fast as possible.
Here's my code:
I'm seeing capture_array performance timings of about 0.12 seconds. Is there any way to speed this up, or is this as fast as it goes? Here are the logs that this program generates:
Thanks!