Snelso91 opened 1 week ago
I wonder if this is the same problem as https://github.com/raspberrypi/picamera2/issues/1125.
You could try the workaround mentioned there (to use the PersistentAllocator), which may be possible if you have just a couple of configurations that you are switching between. Of course, I realise that on a Pi Zero 2 you may be rather memory constrained.
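For reference, a minimal sketch of what that workaround might look like — this is an illustrative setup, not code from this issue, and assumes a picamera2 version that exposes the allocators module and the `Picamera2(allocator=...)` argument (the resolutions are the ones mentioned later in this thread):

```python
# Sketch of the #1125 workaround: a PersistentAllocator keeps buffers alive
# across reconfigurations instead of freeing and reallocating them on every
# mode switch, which is what leaks in this issue.
from picamera2 import Picamera2
from picamera2.allocators import PersistentAllocator

picam2 = Picamera2(allocator=PersistentAllocator())
video_config = picam2.create_video_configuration(main={"size": (1480, 1080)})
still_config = picam2.create_still_configuration(main={"size": (3552, 2592)})
picam2.configure(video_config)
picam2.start()
```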
@davidplowman thanks, I'll look at trying that soon, although I was curious whether commenting out the line in /lib/udev/rules.d/60-dma-heap.rules would be better for fixing the root cause of the memory leak, as mentioned in #1102?
Since you mentioned how PersistentAllocator would be more appropriate for the case of switching back and forth between 2 configs in #1125, I had a related question to do with performance:
If my goal is to switch from the video config to the still config, capture a jpeg, and then switch back to the video config, is switch_mode_and_capture_file the quickest way to do that?
When I say quickest, I'm specifically referring to the total time it takes for the command or function to capture the raw image data (not how long it takes for the jpeg to be created), since once the image data is in RAM the image is "locked in", and can't be changed further by things moving in the camera's FOV.
For example, if the pipeline of switch_mode_and_capture_file is something like Raw Image Data -> Memory -> JPEG Encoder -> JPEG file, then I would expect that to be the quickest method, since the raw image data is captured first without any delay.
But if it is something like Raw Image Data -> JPEG Encoder -> JPEG file, then the time to capture the frame data depends on the slow JPEG encoder instead of a faster operation like just storing the image data in memory?
The reason I'm asking is that there is currently a delay in the chain: the printer sends a signal to capture a timelapse frame, the HA server receives it and sends an MQTT packet, and finally, once the Pi receives the packet, there is the time it takes to switch modes and capture the frame.
So if I could optimise the last part, the time it takes to capture the image data (not the jpeg), then that would reduce the total time I need to have the printer stay parked (since the printer can't receive feedback from the Pi, it just waits in the park area a fixed amount of time for the signal to propagate and the Pi to take an image).
On anything other than a Pi 5, the vidbuf_cached heap must point to the linux,cma heap.
If you're going to do a mode switch for the capture, then switch_mode_and_capture_file is probably best. Using the PersistentAllocator might improve it slightly, because you don't have to deallocate and reallocate buffers all the time.
Note that stopping the camera and switching to a different mode is relatively time-consuming, normally I would expect a few hundred milliseconds. The other option would be to run permanently in the capture mode - if you can run at 10 or 15fps then that's a much lower latency. The catch is that you might not have enough memory. Running with 2 buffers might be enough for occasional captures, you'd have to try it - otherwise you'd need 3. If you don't need a preview then you could use 24-bit RGB instead of 32-bit to save some space. You might even be able to use YUV420 (even less memory), though you'd probably need a software conversion to RGB for saving as JPEG (though OpenCV has a routine for that).
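To make the memory trade-off concrete, here is a rough back-of-envelope calculation for per-buffer sizes at the 3552x2592 still resolution used in this thread (real buffers will be somewhat larger because of row-stride padding):

```python
# Approximate per-buffer sizes for a 3552x2592 stream in different formats.
# Actual buffers are a little larger due to row-stride padding.
w, h = 3552, 2592

sizes = {
    "XRGB8888 (32-bit)": w * h * 4,
    "RGB888 (24-bit)": w * h * 3,
    "YUV420": w * h * 3 // 2,  # full-res Y plane + quarter-res U and V
}

for name, size in sizes.items():
    print(f"{name}: {size / 1e6:.1f} MB per buffer, "
          f"{2 * size / 1e6:.1f} MB for 2 buffers")
```

On a Pi Zero 2 W with 512 MB of RAM (and less than that in the CMA heap), the difference between ~37 MB and ~14 MB per buffer is significant when you need 2 or 3 of them.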
@davidplowman ok thanks, I'll leave the switch_mode_and_capture_file as-is for now and start using PersistentAllocator.
The modified code hasn't had an issue when using the PersistentAllocator yet, but I've only run it for a few hours so I'll have to wait a bit longer to say definitively whether the workaround worked.
Regarding switching speed, a while ago I did time how long it took to run:
picam2.stop_encoder(encoder)
picam2.switch_mode_and_capture_file(still_config, filename)
And it usually came out around 1.5 secs (with some variance) before using PersistentAllocator, but I know that this includes the time to create the jpg image from the raw data.
So are you implying that the time to run stop_encoder(), switch modes, and capture the image into RAM is usually less than 1 sec?
In other words, if the difference in time between capturing an image to RAM with stopping the encoder and switching config, vs staying in the high res config and capturing an image to RAM, is <0.5 sec, then I think it would probably be fine to leave it with the dual video and stills configs in order to get the higher framerate video.
This is because the total delay I'm looking at for the whole signal path is on the order of 5.15 secs currently (the vast majority comes from the printer being slow to send MQTT messages to HA as far as I can tell), so taking 0.5 secs off that might be worth less than losing smooth video.
Also, doing it this way (dual configs) would allow me to use some features such as HQ noise reduction for the still images, while still using the normal fast denoising for the video mode, whereas if you only used a single config you would probably have to use fast denoising in order to get a decent framerate for the video, at the expense of the denoising quality for the stills?
Regarding memory, currently I don't explicitly set the format of the image in either config, but I assumed that because I don't configure or call for a preview at any point then it would automatically select either RGB24 or YUV?
If not, then I suppose setting RGB24 for both would be best, or would setting YUV for the video mode be better? Obviously YUV would use less memory than RGB24, but from what I understand you are throwing away chroma data in order to do so, whereas in the case of RGB32->RGB24 you aren't losing any info since an image from a camera doesn't contain alpha data right?
Although maybe for MJPEG, using YUV has no quality difference, since the MJPEG stream isn't the best quality to begin with (some blockiness) when using the H/W encoder?
And regarding the actual image format for RGB24, does it make any difference for the still jpeg config, or the video MJPEG config whether I select BGR888, or RGB888? If it doesn't make a difference, is there a "normal" or default selection that I should choose out of those 2 options?
> @davidplowman ok thanks, I'll leave the switch_mode_and_capture_file as-is for now and start using PersistentAllocator. The modified code hasn't had an issue when using the PersistentAllocator yet, but I've only run it for a few hours so I'll have to wait a bit longer to say definitively whether the workaround worked. Regarding switching speed, a while ago I did time how long it took to run:
>
> picam2.stop_encoder(encoder)
> picam2.switch_mode_and_capture_file(still_config, filename)
>
> And it usually came out around 1.5 secs (with some variance) before using PersistentAllocator, but I know that this includes the time to create the jpg image from the raw data. So are you implying that the time to run stop_encoder(), switch modes, and capture the image into RAM is usually less than 1 sec? In other words, if the difference in time between capturing an image to RAM with stopping the encoder and switching config, vs staying in the high res config and capturing an image to RAM, is <0.5 sec, then I think it would probably be fine leaving it with the dual video and stills configs in order to get the higher framerate video. This is because the total delay I'm looking at for the whole signal path is on the order of 5.15 secs currently (the vast majority comes from the printer being slow to send MQTT messages to HA as far as I can tell), so taking 0.5 secs off that might be worth less than losing smooth video.
It might be worth timing some of this stuff for yourself. switch_mode_and_capture_file obviously stops, reconfigures and starts the camera twice, whilst saving a JPEG in the middle. You could write out this code explicitly, to see how the bits behave. Maybe like this:
picam2.stop()
picam2.configure(still_mode)
picam2.start()
req = picam2.capture_request()
req.save('main', "test.jpg")
req.release()
picam2.stop()
picam2.configure(video_mode)
picam2.start()
That's pretty much what switch_mode_and_capture_file does; I don't think the above should be very different.
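To see where the time actually goes, each of those steps could be wrapped in a small timing helper. `timed` below is just an illustrative name, not a picamera2 API:

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn, print how long it took in milliseconds, and return its result."""
    t0 = time.monotonic()
    result = fn(*args, **kwargs)
    print(f"{label}: {(time.monotonic() - t0) * 1000:.1f} ms")
    return result

# With a running Picamera2 instance you could then time each stage, e.g.:
# timed("stop", picam2.stop)
# timed("configure still", picam2.configure, still_mode)
# timed("start", picam2.start)
# req = timed("capture_request", picam2.capture_request)
```

That would show whether the ~1.5 secs is dominated by the reconfiguration, the capture, or the JPEG encode.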
> Also, doing it this way (dual configs) would allow me to use some features such as HQ noise reduction for the still images, while still using the normal fast denoising for the video mode, whereas if you only used a single config you would probably have to use fast denoising in order to get a decent framerate for the video, at the expense of the denoising quality for the stills?
That's true. On Pis other than Pi 5s, doing the higher quality (and slower) stills denoise may reduce the framerate. Note that you can change the denoise setting while the camera is running, but the catch will be knowing on which frame the change has actually taken effect.
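If you did want to try that, a sketch of switching the denoise mode at runtime, using libcamera's draft controls (`picam2` is assumed to be an already-running Picamera2 instance):

```python
# Sketch: toggle denoise quality at runtime instead of reconfiguring.
# NoiseReductionMode lives in libcamera's draft controls namespace.
from libcamera import controls

# Fast denoise while streaming video:
picam2.set_controls(
    {"NoiseReductionMode": controls.draft.NoiseReductionModeEnum.Fast})

# High-quality (slower) denoise shortly before grabbing the still:
picam2.set_controls(
    {"NoiseReductionMode": controls.draft.NoiseReductionModeEnum.HighQuality})
```

The caveat above still applies: you won't know exactly which frame the change takes effect on.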
> Regarding memory, currently I don't explicitly set the format of the image in either config, but I assumed that because I don't configure or call for a preview at any point then it would automatically select either RGB24 or YUV?
>
> If not, then I suppose setting RGB24 for both would be best, or would setting YUV for the video mode be better? Obviously YUV would use less memory than RGB24, but from what I understand you are throwing away chroma data in order to do so, whereas in the case of RGB32->RGB24 you aren't losing any info since an image from a camera doesn't contain alpha data right?
Preview/video modes select 32-bit ARGB formats by default, but only because they're usually easier to display. If you're not displaying them, 24-bit RGB would be more efficient. YUV420 is even better, and the H.264 video encoder will accept it directly.
Unfortunately Python mostly doesn't have much support for YUV420, so you'd have to convert it to 24-bit RGB for saving as a JPEG. OpenCV has a function cv2.cvtColor that will do this, like so:
rgb = cv2.cvtColor(yuv420, cv2.COLOR_YUV420p2RGB)
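For what it's worth, the YUV420 frame arrives as a single plane of shape (h * 3 // 2, w), which is what cvtColor expects. A small self-contained illustration with a synthetic frame (with a real camera this array would come from capture_array()):

```python
import numpy as np

# Synthetic YUV420 frame: h rows of Y, followed by the quarter-resolution
# U and V planes packed below it, giving h * 3 // 2 rows in total.
w, h = 640, 480
yuv420 = np.zeros((h * 3 // 2, w), dtype=np.uint8)

try:
    import cv2
    rgb = cv2.cvtColor(yuv420, cv2.COLOR_YUV420p2RGB)  # shape (h, w, 3)
    print("converted to", rgb.shape)
except ImportError:
    print("OpenCV not available; YUV420 plane shape is", yuv420.shape)
```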
> Although maybe for MJPEG, using YUV has no quality difference, since the MJPEG stream isn't the best quality to begin with (some blockiness) when using the H/W encoder?
MJPEG should accept RGB and YUV420 image formats but, as you say, quality is worse for a similar bitrate compared to H.264.
> And regarding the actual image format for RGB24, does it make any difference for the still jpeg config, or the video MJPEG config whether I select BGR888, or RGB888? If it doesn't make a difference, is there a "normal" or default selection that I should choose out of those 2 options?
I don't think the choice of BGR or RGB makes any difference. My rule of thumb is that reds and blues invariably come out the wrong way round and you end up swapping colours until it's right. Always swap colours by requesting the other format, rather than doing software conversions.
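Just to illustrate what the two format names mean in memory: BGR888 and RGB888 differ only in channel order, so a software swap is a reversal of the last axis. But as noted above, requesting the other format string in the configuration is the zero-cost fix:

```python
import numpy as np

# One pixel stored as B, G, R; reversing the last axis gives R, G, B.
bgr = np.array([[[10, 20, 30]]], dtype=np.uint8)
rgb = bgr[..., ::-1]
print(rgb.tolist())  # [[[30, 20, 10]]]

# Preferred fix with picamera2 (sketch, requires a camera): just request
# the other format string in the configuration, e.g.
# config = picam2.create_video_configuration(
#     main={"size": (1480, 1080), "format": "RGB888"})  # or "BGR888"
```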
Describe the bug
I have a python program called bambu_rpi_webcam.py that I created in order to run an MJPEG stream 24/7 in video mode and then occasionally switch to still mode to take a full resolution still for a timelapse when a signal is received from Home Assistant via MQTT. The MJPEG server functionality is copied and pasted from the example code mjpeg_server_2.py.
To do the alternating functionality of streaming MJPEG and taking high res still images, I have two configs set up. A lower resolution (1480, 1080) 30fps video config:
A high resolution (3552, 2592) stills config:
In order to quickly switch from the MJPEG mode to the stills mode, I use the following function, which gets called by a callback function when the MQTT signal is received:
So you can see the core of this function is to use switch_mode_and_capture_file to quickly switch to the stills config, take the image and then switch back.
This function works completely fine the first few times it is called. However, after approx. 5 hours of running the program, and about 267 times of the MQTT handler calling this function, the program inevitably silently locks up with the error:
OSError: [Errno 12] Cannot allocate memory
If you look below you can see the full traceback for this error, and it is clear that it is caused when switch_mode_and_capture_file is called.
In order to try and fix this error I tried following advice from #1102, to update my OS using:
I also updated picamera2 and others using:
However neither of these things helped get rid of the problem.
In #1102, I also saw a mention of a potential solution of commenting out this line in /lib/udev/rules.d/60-dma-heap.rules:
SUBSYSTEM=="dma_heap", KERNEL=="linux,cma", SYMLINK+="dma_heap/vidbuf_cached", OPTIONS+="link_priority=-50"
And in #1125, I saw a mention of using a "persistent allocator" to solve the problem.
Since these were 2 different solutions for 2 different bugs, I wasn't sure if either was the correct solution for my exact issue. In addition, I also saw a comment https://github.com/raspberrypi/picamera2/issues/1102#issuecomment-2367550873 that said that #1102 was to do with the Pi 5, and so a new bug report should be opened for devices that aren't the Pi 5. And so for these 2 reasons I have opened this new issue.
To Reproduce
I have found it hard to reproduce the behaviour on demand by using a single test script. I tried using this test python script, but even with 10,000 iterations I could not force the bug to show up: test_stability.py
Whereas if I let the original bambu_rpi_webcam.py program run for 5 hours and only 267 iterations then it will cause the bug.
Expected behaviour
I was expecting switch_mode_and_capture_file to be able to switch back and forth between video and still modes an infinite number of times without crashing due to a memory leak/bug. Clearly the problem is not a general lack of memory, otherwise it would not work the first time; this issue only occurs after several iterations of using switch_mode_and_capture_file.
Console Output, Screenshots
The traceback when the error occurs is as follows:
Below is the full journalctl log of the program up until the point I noticed it was no longer working because of the memory error: journalctl.log
Hardware: Raspberry Pi Zero 2 W, Raspberry Pi Camera Module V3 Wide
Additional context
OS: Raspberry Pi OS Bookworm
OS version (result of uname -a): Linux p1s-cam 6.6.51+rpt-rpi-v8 #1 SMP PREEMPT Debian 1:6.6.51-1+rpt3 (2024-10-08) aarch64 GNU/Linux
picamera2 version (result of dpkg -s python3-picamera2 | grep Version): 0.3.22-2
As per #1102:
Result of ls -l /dev/dma_heap:
Contents of /lib/udev/rules.d/60-dma-heap.rules: