q5270114 closed this issue 1 year ago
Your quoted code is insufficient for anyone to reproduce your test case. You've given no information as to what resolution you're asking to encode, for a start.
The two queues should normally be treated independently, not with DQBUF being called on both based on the one event. Due to pipelining and internal buffering, the input frame is finished with well before the encoded frame is ready. At 1080p, memory says that the full frame encode takes around 45ms.
Sorry, my test case is based on 1080p encoding. When I set the encoder to VBR mode, this test case can encode 1080p H.264 video at close to 60fps. Do you mean that 1080p requires 45ms per frame in CBR mode? So in CBR mode the encoder can only encode 1080p at 22fps, right?
Do you mean that 1080p requires 45ms per frame in CBR mode? So in CBR mode the encoder can only encode 1080p at 22fps, right?
No, it is pipelined. The hardware needs images in a particular format, so the first stage is an image conversion. The second stage generates motion estimation values for all potential options. The third is the choice of motion vector. The fourth is entropy coding (CAVLC or CABAC).
Memory says that steps 2 & 3 are done as one job, but that still leaves a potential pipeline of three frames. The maximum frame rate is set by the time taken for the longest of the steps, not the total. What does fail badly with a pipelined codec is feeding one frame in and waiting for the output frame before feeding the next one in.
You've created this as an issue under libcamera-apps. Are you actually using libcamera-apps, or just the V4L2 encoder? If libcamera-apps, why the need to create your own wrapper around the H264 encoder?
Thanks for your reply. We have a requirement to live-stream H.264 (1080p 30fps) using the hardware encoder in CBR mode, and we need to adjust the bit rate in real time. Since there is little documentation on Raspberry Pi hardware encoding, I could only choose to modify the examples in libcamera-apps to support our own business logic. Do you have any good suggestions for modifying my test case to achieve this goal? By the way, when I encode with the ffmpeg command-line tool, the hardware encoder is relatively fast and the bit rate control is very stable.
Each frame consumes approximately 100ms.
Raspberry Pi 4B. OS:
This is my encoding code: