Hi Luke,
Yup - to address your first attempt: when you set the resolution of the camera, that's the resolution the camera will attempt to run at (or, more precisely, the resolution that influences which mode the camera actually runs in; the first section of the Camera Hardware chapter has the gory details). The resize parameter does literally that: it applies a resizer in the image/video processing pipeline.
When you set the camera to 320x240 (and, I'm guessing, 30fps), the camera will switch to the 1296x972 mode and down-scale to 320x240. Recording with resize=2592x1944 then sticks a resizer in the pipeline just before the H.264 encoder, which blows that 320x240 output up to 2592x1944.
Conversely, when you set the camera to 2592x1944 (and presumably 15fps), the camera will switch to the full 2592x1944 mode. It's impossible to record H.264 directly at this resolution (not enough GPU oomph for the encoder, as I understand it), but you can throw a resizer in there, as you have done, to down-scale the full frames to 320x240 before the encoder has to deal with them, and that allows the recording to work.
Now, at this point you should be able to simultaneously capture an image using the video port without dropping any frames, albeit at a slightly lower quality than using the still port (and indeed this seems to work happily for me in 1.5 and 1.6), but with the full resolution of the camera (in this case 2592x1944).
If you want to capture with the still port while recording, things are a tad more complicated. Capturing JPEGs on the still port requires opaque encoding, and still-port encoding can only be changed by disabling and re-enabling the camera, so obviously it can't be done while recording. To work around this, one might attempt to force a still-port JPEG capture prior to recording to ensure the still port's encoding is set correctly. However, this won't work because start_recording forces the still port's encoding to I420 prior to recording video. The reason this was introduced (way back in #59) was another limitation: the resizer doesn't work with opaque encoding. To ensure the widest variety of formats (unencoded ones in this case) and options remained available for captures from the still port while recording, the start_recording method sets the still port to I420 encoding prior to starting the recording.
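To make this concrete, here's a minimal sketch of the failure mode (the exception class shown is my assumption; only the error message is confirmed later in this thread):

import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (2592, 1944)
    camera.framerate = 15
    camera.start_recording('video.h264', resize=(320, 240))
    try:
        # The still port is now locked to I420, so an encoded (JPEG)
        # still-port capture fails (use_video_port=False is the default)
        camera.capture('foo.jpg')
    except picamera.PiCameraRuntimeError as exc:
        print(exc)  # "Recording is currently running"
    camera.stop_recording()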
So ... if you want to capture images from the still port while recording they have to be unencoded (YUV, RGB, etc.) images:
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = camera.MAX_RESOLUTION
    camera.framerate = 15
    camera.start_recording('video.h264', resize=(320, 240))
    camera.wait_recording(10)
    camera.capture('foo.data', 'yuv')
    camera.wait_recording(10)
    camera.stop_recording()
However, you can always capture JPEGs from the video port while recording:
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = camera.MAX_RESOLUTION
    camera.framerate = 15
    camera.start_recording('video.h264', resize=(320, 240))
    camera.wait_recording(10)
    camera.capture('foo.jpg', use_video_port=True)
    camera.wait_recording(10)
    camera.stop_recording()
Ah! I see ...
I was trying to avoid the video port on purpose so I'd get the same high-quality images that I would ordinarily get from the still port without a video capture in progress. Maintaining the image quality is very important for my requirements.
I didn't try a capture in yuv or rgb ... I'm looking forward to trying that later today. I certainly don't mind having the images in yuv/rgb if I can keep that resized 320x240 video streaming without dropped frames. I'll leave a final comment after I try your tip.
Thanks again for all of your help, Luke
Hmm ... generally speaking I would say that if you're capturing from the still port while recording you will definitely see dropped frames. However, having just tried the little recipe above for a YUV capture while recording, I couldn't see any in the resulting video.
Two possible explanations for this: 1) because the camera's running at 15fps (to enable full resolution), the mode switch happened fast enough not to be noticeable at the lower framerate, or 2) still captures switch the camera to the full-resolution mode, but the camera's already running at full resolution, so the mode switch doesn't occur (or doesn't cause dropped frames). Or some combination of the two?
Certainly whenever I've tried still port captures when recording at lower resolutions, there are quite obvious dropped frames in the resulting video, but this setup doesn't seem to exhibit the issue (or it's not so bad that it's noticeable to me anyway!)
That's great ... and I'm just trying to stream over the Net to a VLC webpage plugin. A few dropped frames here and there would be fine. I'm also fine with the 15fps. It's far more important to me (from a security perspective) to have high resolution, clear shots when (a) motion is detected by the Pi, (b) the user clicks a button to snap an image, or (c) a regular timed shot is taken by the Pi.
I have a very good feeling that this is going to work out ...
I've run into an issue when loading the image into an io.BytesIO() for processing:
io_stream1 = io.BytesIO()
camera.capture(io_stream1, format='rgb', use_video_port=False, quality=85)
Results in:
mmal: mmal_vc_port_enable: failed to enable port vc.ril.resize:in:0(I420): ENOMEM
mmal: mmal_port_enable: failed to enable connected port (vc.ril.resize:in:0(I420))0x7c17e0 (ENOMEM)
mmal: mmal_connection_enable: output port couldn't be enabled
The trapped exception is "Out of memory"
Is this because I'm trying to save a huge RGB image to a bytes object that can't handle the size of the image ... or am I perhaps hitting a Pi RAM limit?
I tried saving to a file to see the image size, but ...
with picamera.PiCamera() as camera:
    camera.resolution = (2592, 1944)
    camera.capture('/home/pi/Desktop/img.data', 'rgb')
Still fails with the out-of-memory error. I tried restarting the Pi. I tried with yuv. No dice. I tried this code with jpeg just to make sure the code runs, and yes, I received the JPEG file. Any ideas on the out-of-memory error with rgb and yuv?
Okay, the issue with RGB is known: you can't do full-resolution image captures in RGB - there's just not enough memory in the GPU for the buffers (it might be possible with huge memory splits, but I haven't tried it; those tests are simply disabled in the test suite). I'm surprised you're seeing the issue with YUV, though - the test suite does include full-resolution YUV captures. It occurs to me, however, that I probably don't have a test for full-resolution capture while performing resized video recording from full resolution in the background! That might either be beyond the Pi's capabilities (from the still port at least; we know it works from the video port) or simply require a larger GPU memory split.
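For a rough sense of scale (back-of-envelope numbers of my own, assuming the usual raw-capture padding to the GPU's 32x16 block size):

# Approximate full-resolution raw buffer sizes
width, height = 2592, 1944
fwidth = (width + 31) // 32 * 32    # 2592 (already a multiple of 32)
fheight = (height + 15) // 16 * 16  # 1952
print(fwidth * fheight * 3)         # RGB: ~15.2 MB per frame
print(fwidth * fheight * 3 // 2)    # YUV (I420): ~7.6 MB per frame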
For now, I'd recommend trying to bump up the GPU memory split to something like 256 (if you're on a model B - if you're on a model A there's not much you can do). If that doesn't work you may have to fall back to using video-port based captures (it may not be as good quality as still port, but at full res it's not bad, and at least we know it works!)
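For reference, the split is changed by setting gpu_mem in /boot/config.txt (or via raspi-config) and rebooting:

# /boot/config.txt
gpu_mem=256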
Yes, taking the gpu_mem to 256 did the trick. It took YUV images at max resolution. However, 192 (apparently the limit for a Model A) failed at max resolution. I did some testing at 192 and could capture images up to 2400x1560. There was no difference between saving to a file or saving to a BytesIO object. This testing didn't occur while trying to record video, and I speculate that doing this while recording video is going to push the resolution further down.
Even if I were ok with the compromise on resolution, I'm still left having to convert YUV into something that PIL (via Pillow) can handle ... and it has to be done with a BytesIO object holding the image data.
It looks like I've pressed the little Pi for all she's got. Looking at my requirements, the most important item is snapping max resolution images with the still port (for maximum quality). I'll have to compromise on the video streaming to work around the hardware limitations. If anyone from the Pi Foundation is listening, it would be helpful in the future (Version 3 Pi ??) to have more RAM (thus more GPU RAM) for the Model A. It looks like my needs would have been met by a 512MB Model A.
I do see the bit in the docs on picamera.array and the code sample you posted for YUV->RGB.
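For anyone following along, a minimal sketch of that route (assuming numpy and Pillow are installed; the 2400x1560 size is just the largest that worked at gpu_mem=192 above):

import picamera
import picamera.array
from PIL import Image

with picamera.PiCamera() as camera:
    camera.resolution = (2400, 1560)
    with picamera.array.PiYUVArray(camera) as stream:
        # Still-port YUV capture; rgb_array converts the planes on the CPU
        camera.capture(stream, format='yuv')
        Image.fromarray(stream.rgb_array).save('snap.jpg', quality=85)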
I'll close this question out. I still have some more thinking to do, but I understand my options much better now. Thanks again for your help.
Sorry for reviving an old thread, but I'm trying to do something very similar to this and I wondered what your eventual solution was. @GuardRex, how did you end up capturing and storing high-quality images? I'm able to stream H.264 video and simultaneously capture YUV images on the still port, but I can't figure out how to do anything useful with the YUV files. Ideally there would be a fast (GPU-accelerated?) way to JPEG-encode them before storage, but if I need to post-process them, I think that would be OK.
Thanks for any insight you can share!
@bennettrogers I ended up just snapping the image from the still port while using MJPEG streaming. This ended up meeting my needs the best: For my system, the quality of the video is not all that important ... the high-res image snaps are more important.
By setting up an MJPEG stream, I'm able to keep everything (streaming video and hi-res image snaps) in an image-snapping scenario, without having to jump back and forth between H.264 and image snaps, and without degrading the H.264 too much through camera mode changes while snapping.
For the MJPEG streaming, I'm using JPEG image snaps (of course) from the video port of the camera, a quality setting of 10, no thumbnail, and a resolution of 320x240. Whether or not the MJPEG is actively streaming, I can jump in anytime and take a hi-res snap: a JPEG from the still port, with a quality setting of 85, a thumbnail, and the full camera resolution of 2592x1944. It works pretty well.
As for the specifics on captures (hi-res images), I just take JPEGs from the camera into an io.BytesIO object and use PIL for image mods prior to saving the snaps. I can't really say anything here (publicly) about how the MJPEG stream works or what happens with the snaps, for security reasons, but you can find many good resources online that will help you stream MJPEG off the Pi. Then you can just snap hi-res images as you go.
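A rough sketch of those two capture styles side by side (my own reconstruction of the settings described above, not the actual production code):

import io
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (2592, 1944)
    camera.framerate = 15

    # Low-res, low-quality frame for the MJPEG stream (video port)
    frame = io.BytesIO()
    camera.capture(frame, format='jpeg', use_video_port=True,
                   resize=(320, 240), quality=10, thumbnail=None)

    # Full-res, high-quality snap (still port, default thumbnail kept)
    snap = io.BytesIO()
    camera.capture(snap, format='jpeg', use_video_port=False, quality=85)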
[EDIT] As for YUV, if PIL can't handle them, I don't know what to suggest. Once I figured out how to make things work for my setup, I didn't go further with YUV.
Thanks for the reply. What framerate are you getting for the mjpeg stream? The quality of the stills is more important in my scenario as well, but I would love to get the stream to be of reasonable quality. A quick test was getting me only 2-3fps (vs at least 25fps for h.264), but I haven't tried tweaking the settings much yet. If I can play around enough to figure out a configuration that will give me a decent video stream, this might end up being sufficient. I tested the approach of using the video port to capture the stills, and the loss of FOV and metadata wasn't acceptable.
@waveform80, if I stick to the route of h.264 streaming and capturing raw stills, do you have any thoughts on the most efficient way to encode the YUV images into jpeg? I imagine I could figure out something with the PiYUVArray, but that sounds resource intensive if done naively (which is probably what would happen since I'm new to this stuff). Is there a way to add a step to the pipeline to encode to jpeg with the GPU?
@bennettrogers I'm not sure on the framerate. I don't set a header coming through with it from picamera, and I can't seem to get it from Fiddler or Google Dev Tools.
@bennettrogers - just trying to clear out some old tickets and realized I never replied to this, sorry!
With an MJPEG video recording you ought to get similar framerates to H.264 - i.e. 25fps should be no problem. Incidentally, this isn't a matter of taking JPEGs one after another; I simply mean call start_recording with the format set to mjpeg. You can split frames out of the result by looking for the JPEG magic sequence (ff d8). Here's some gists to play around with.
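By way of illustration (a fresh sketch, not one of the gists): a custom output object whose write method starts a new frame whenever a chunk begins with the JPEG start-of-image marker:

import io
import picamera

class MJPEGSplitter(object):
    # Collects complete JPEG frames from an MJPEG recording by
    # watching for the JPEG start-of-image marker (ff d8)
    def __init__(self):
        self.frames = []
        self.current = None

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):
            if self.current is not None:
                self.frames.append(self.current.getvalue())
            self.current = io.BytesIO()
        if self.current is not None:
            self.current.write(buf)
        return len(buf)

with picamera.PiCamera() as camera:
    camera.resolution = (320, 240)
    camera.framerate = 25
    output = MJPEGSplitter()
    camera.start_recording(output, format='mjpeg')
    camera.wait_recording(5)
    camera.stop_recording()
    print('%d frames captured' % len(output.frames))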
@waveform80 I realised that the default example uses:

camera.start_recording('video.h264', resize=(320, 240))
camera.wait_recording(10)
camera.capture('foo.data', 'yuv')
camera.wait_recording(10)

In this case the response time for an image capture is 10s (if, let's say, I'm triggering the capture based on some condition). To reduce the response time, one would need to decrease the parameter passed to wait_recording. Is that the only way to reduce the response time?
The wait_recording is simply there as a demonstration - it's not required. Eliminate it entirely if you want (wait_recording is purely an alternative to sleep for providing timed delays, but with one important addition: it terminates immediately if something goes wrong in the recording, like running out of disk space).
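For example, a condition-triggered capture can poll in short increments; here's a rough sketch (triggered() is a hypothetical stand-in for whatever condition you're watching):

import picamera

def triggered():
    # Hypothetical placeholder for motion detection, a button press, etc.
    return False

with picamera.PiCamera() as camera:
    camera.resolution = camera.MAX_RESOLUTION
    camera.framerate = 15
    camera.start_recording('video.h264', resize=(320, 240))
    try:
        while True:
            # Returns quickly, and raises if the recording has failed
            camera.wait_recording(0.1)
            if triggered():
                camera.capture('snap.data', 'yuv')
    finally:
        camera.stop_recording()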
Hello Dave,
The Encoders section and recipes don't quite match what I'm trying to accomplish. I want to snap high quality images at high resolution with the still port while capturing lower resolution video from the video port. I don't care about dropped frames if a mode change is needed as long as the video capture isn't killed. I've tried to accomplish this by doing the following:
1. Set the camera resolution to 320x240
2. Start the video capture
3. Snap an image with resize=2592x1944 using use_video_port=False
Problem: Judging by the poor quality of the image taken, the use of resize seems to be resizing a capture at the camera resolution of 320x240 to 2592x1944 ... not actually taking the still image at 2592x1944.
If I flip this and hit it the other way ...
1. Set the camera resolution to 2592x1944
2. Start the video capture with resize=320x240
3. Snap an image using use_video_port=False
Problems: (1) There are many dropped frames in the video (even when I'm not calling for a .capture() image), and (2) when I attempt to snap the image, it throws an exception: "Recording is currently running."

So far, it seems that the only way to do something like this (along the lines of my first attempt) would be to capture video at high resolution (>=1296x730) so that the resize with .capture() will not result in a great loss of image quality. Is that the only way to pull this off?