konstantins-tracxpoint opened this issue 4 years ago (Open)
The motion vectors between frame N and frame N-1 will always be delivered as the buffer after the encoded data for frame N.
What does a PTS mean for motion vectors? Should it be the PTS of the first frame in the comparison or the second? The output buffer is not a self-contained thing. I may be able to add the same PTS as used by the encoded frame, but I don't believe that is the right information.
What do you mean by series marking?
"...The motion vectors between frame N and N-1 frame will always be delivered as the buffer after the encoded data for frame N."

It's true; here's my log:

```
Read from output buffer and wrote to output file 4127/195840, frame 526, ts: 522
Read from output buffer and wrote to output file 2108/195840, frame 526 [vectors], ts: 0
Read from input file and wrote to input buffer 195840/195840, frame 843
Read from output buffer and wrote to output file 4150/195840, frame 527, ts: 523
Read from output buffer and wrote to output file 2108/195840, frame 527 [vectors], ts: 0
Read from input file and wrote to input buffer 195840/195840, frame 844
```
But, unfortunately, I can't match it to the frames from the *.h264.

Match code (Python): https://drive.google.com/open?id=1dCo-qJFMSuu1T0ZaOGWJnVkqdaLBuCMB
Match results: https://drive.google.com/open?id=1r2wz2f6NFSWg-wyFEu1qKhbRCURL-0jK

The best match was with a delay of 315 frames (~ the processing delay of the encoder?).
Re: PTS. I mean a correct PTS for the motion vectors, plus or minus some processing error; for example: (frame_N_pts + frame_N_1_pts)/2 ± some error.
Re: "mark series". I mean marking the input packets of the encoder (as I did in my code).
rpi-encode-yuv (modified): https://drive.google.com/open?id=1Wi9l7iix4TGNIXi9ZioTo3Gj-VVPmpG6
Hi all, I'm writing an application that needs properly synchronized motion vectors. As a base app I took "rpi-encode-yuv" from https://github.com/tjormola/rpi-openmax-demos
I just enabled motion vectors using this command:
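For reference, inline motion vectors on the Pi's OMX H.264 encoder are usually switched on by setting a Broadcom-specific boolean parameter on the encoder's output port before moving the component to Executing. This is a hedged sketch, not taken from this thread: the index name, the `OMX_INIT_STRUCTURE` helper macro (from rpi-openmax-demos), and port number 201 (video_encode output) should all be verified against your userland headers:

```c
/* Assumed enable sequence -- verify OMX_IndexParamBrcmVideoAVCInlineVectorsEnable
 * exists in your IL/OMX_Broadcom.h before relying on this. */
OMX_CONFIG_PORTBOOLEANTYPE vectors;
OMX_INIT_STRUCTURE(vectors);            /* zero + fill nSize/nVersion */
vectors.nPortIndex = 201;               /* video_encode output port */
vectors.bEnabled = OMX_TRUE;
OMX_SetParameter(encoder_handle,
                 OMX_IndexParamBrcmVideoAVCInlineVectorsEnable, &vectors);
```

With this enabled, the encoder emits the extra side-info buffers seen in the log below in addition to the normal encoded-frame buffers.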
Stages of the experiment:
1) Took a series of images.
2) Wrote them to YUV:
```
gst-launch-1.0 multifilesrc location="input/in00%04d.jpg" index=1 ! jpegdec ! videoscale ! 'video/x-raw,width=480,height=270' ! videoconvert ! 'video/x-raw,format=I420' ! filesink location=dump.yuv
```
3) Encoded: `rpi-encode-yuv < dump.yuv > 1.h264`
The rpi-encode-yuv code was modified to also save the motion vectors.
After some days of experiments, I understand that this (syncing motion vectors with the H.264 output) is impossible. Reasons:
1) There's no PTS information in the motion vector buffer.
2) There's no way to enable some kind of series "marking".