napamark opened this issue 7 years ago
The Raspberry Pi does not seem able to encode in h264 format with this large of a frame size - I get error messages from the encoder.
Correct. The H264 encoder maxes out at 1920x1080.
8 MPix of YUV is ~12 MB/frame. 3 seconds at 15 fps = 540 MB. That is far in excess of what most storage solutions can sustain. I also hope you're using a Pi 2 or Pi 3 with 1 GB of RAM on board. Even so there will be a fair amount of moving data around with buffer management.
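As a sanity check on those numbers, a short Python sketch (this assumes YUV420 at 1.5 bytes per pixel, and that picamera pads the width up to a multiple of 32 and the height up to a multiple of 16 for raw captures):

```python
# Rough RAM budget for raw YUV420 capture at full IMX219 resolution.
# Assumes 1.5 bytes/pixel (YUV420) and picamera's padding of width to a
# multiple of 32 and height to a multiple of 16.

def pad(value, multiple):
    """Round value up to the next multiple."""
    return (value + multiple - 1) // multiple * multiple

def yuv420_frame_bytes(width, height):
    return pad(width, 32) * pad(height, 16) * 3 // 2

frame = yuv420_frame_bytes(3280, 2464)
buffer_3s = frame * 15 * 3  # 3 seconds at 15 fps

print(frame)      # 12182016 bytes, ~12 MB per frame
print(buffer_3s)  # 548190720 bytes, ~548 MB for 3 seconds
```

That is over half the RAM on a 1 GB Pi before counting the OS, Python, and the GPU's own buffers.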
I'll leave details of PiCameraCircularIO itself to waveform80. Potentially it is just the first_frame parameter in copy_to that you need to specify, as the default is PiVideoFrameType.sps_header, which you'll never get with yuv.
https://github.com/waveform80/picamera/blob/master/picamera/streams.py#L723 / http://picamera.readthedocs.io/en/latest/api_streams.html#picamera.PiCameraCircularIO.copy_to
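A minimal sketch of that suggestion, assuming stream is a PiCameraCircularIO that was recorded in yuv format (the function and file names here are illustrative, not from the thread):

```python
def save_recent_yuv(stream, filename, seconds=None):
    """Copy the most recent raw frames out of a circular buffer.

    Passing first_frame=None tells copy_to not to search backwards for
    an SPS header first. The default, PiVideoFrameType.sps_header, only
    ever appears in H.264 streams, so with yuv data nothing would be
    found and the output file would be empty.
    """
    stream.copy_to(filename, seconds=seconds, first_frame=None)

# e.g. save_recent_yuv(stream, 'recent.yuv', seconds=2)
```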
Yes, makes sense. I only really need 1-2 seconds of recording time, and I'm using a Pi 3 with 1 GB RAM. I looked into the YUV format some more and read the source code at the link you provided. Both seem to indicate that YUV files have no headers - they're simply concatenated image data - so finding a file header in a circular buffer would be impossible with this format.

I don't really need a circular buffer structure - I just grabbed onto PiCameraCircularIO because it seemed simpler to use. My real use case is more like a one-shot trace capture: I press a button on my UI, then my Python code turns on video capture to RAM and flashes six LEDs over about 1.5 seconds. Then the capture shuts down and can be written to flash much more slowly. I just want to make sure the initial video capture is done entirely in RAM for speed. If there is a better method for this linear, one-shot kind of capture to RAM, please suggest one! - Mark
If Python is your language of choice, then your approach seems pretty reasonable. I mainly wanted to point out that what you want to do is very memory hungry.
You are correct that there are no headers in YUV (or RGB); it is just raw image data. However, you don't need any headers. H264 requires header bytes to describe the encoded data, and also an I-frame to avoid needing other frames for context. AIUI the circular buffer retains the length of each frame and writes it out. Seeking backwards to a header is unnecessary as each frame is self-contained; you can save YUV from any frame and that should be fine.
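For reference, the one-shot capture described above might be sketched like this (assuming a picamera.PiCamera already configured for the desired mode; the function name and the LED-flashing placement are illustrative, not from the thread):

```python
import io

def one_shot_capture(camera, seconds=1.5, filename='capture.yuv'):
    """Record raw YUV into RAM for a short burst, then write it out
    to flash afterwards, so slow storage never causes dropped frames.

    `camera` is assumed to be a picamera.PiCamera set up for the
    desired resolution and framerate (e.g. 3280x2464 at 15 fps).
    """
    stream = io.BytesIO()                  # capture lands in RAM
    camera.start_recording(stream, format='yuv')
    camera.wait_recording(seconds)         # flash the LEDs during this window
    camera.stop_recording()
    with open(filename, 'wb') as f:        # slow flash write happens after capture
        f.write(stream.getbuffer())
```

The design point is simply that the BytesIO absorbs the full frame rate during the burst, and the file write happens at whatever speed the flash can manage once recording has stopped.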
Hello - I have been able to get the PiCameraCircularIO class to work in my Python code when using the h264 video format. When I make just one change to switch the format from h264 to yuv, the output video file size is zero. From reading the documentation, it seems that the copy_to method relies on finding a file header in the h264 format to work properly. So my question is this: can I use the yuv video format with a circular memory buffer?

I don't think I can use the h264 format, for a few reasons. First, I am using the PiCamera V2 module with the Sony IMX219, and in our testing we need the full resolution of the sensor (3280 x 2464 pixels). The Raspberry Pi does not seem able to encode in h264 format at this frame size - I get error messages from the encoder. Second, we don't want any data compression at all.

I tried PiCameraCircularIO because I only need to save 2-3 seconds of video at 15 fps, and I think keeping the data in RAM during streaming will work best at that data throughput. When I don't use a circular RAM buffer and save directly to a file, the write rate starts quickly for the first second, then frames start dropping. I assume that's because the flash memory is slow and the intermediate RAM buffer has filled. So if there is a better function to call to use RAM buffers with the yuv video format, please let me know.