Hi, Thank you for your question. To narrow down the issue, are you able to run the same pipeline through an FFmpeg 4.4 build that includes libx264? (Either use a static build from https://johnvansickle.com/ffmpeg/ or follow https://xilinx.github.io/video-sdk/v3.0/using_ffmpeg.html#rebuilding-ffmpeg, enabling libx264.) This way we can establish whether there is a problem with the timestamps at the source. Meanwhile, is it possible to share a test clip? One quick way to inspect the source timestamps is sketched below.
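A minimal sketch (assuming ffprobe from the same FFmpeg build is on the PATH; the localhost URL is a placeholder): it prints per-frame presentation timestamps, so jumps, resets, or missing values at the source are easy to spot.
import subprocess as sp
# Print one pts_time per decoded video frame (CSV output, no key prefix).
cmd = ['ffprobe', '-v', 'error', '-rtsp_transport', 'tcp',
       '-select_streams', 'v:0',
       '-show_entries', 'frame=pts_time',
       '-of', 'csv=p=0',
       'rtsp://localhost:8081/stream_1080p_1']  # placeholder URL
proc = sp.Popen(cmd, stdout=sp.PIPE, text=True)
for i, line in enumerate(proc.stdout):
    print(line.strip())  # irregular steps here point at a source problem
    if i >= 49:          # stop after 50 frames
        proc.terminate()
        break
Cheers,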
Hi,
ffmpeg version:
<<<<<<<== FFmpeg xrm ===>>>>>>>>
No device set hence falling to default device 0
------------------i=0------------------------------------------
xclbin_name : /opt/xilinx/xcdr/xclbins/transcode.xclbin
device_id : 0
------------------------------------------------------------
ffmpeg version n4.4.xlnx.1 Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 9 (Ubuntu 9.3.0-17ubuntu1~20.04)
configuration: --prefix=/opt/xilinx/ffmpeg --datadir=/opt/xilinx/ffmpeg/etc --enable-x86asm --enable-libxma2api --disable-doc --enable-libxvbm --enable-libxrm --enable-libfreetype --enable-libfontconfig --extra-cflags=-I/opt/xilinx/xrt/include/xma2 --extra-ldflags=-L/opt/xilinx/xrt/lib --extra-libs=-lxma2api --extra-libs=-lxrt_core --extra-libs=-lxrt_coreutil --extra-libs=-lpthread --extra-libs=-ldl --disable-static --enable-shared
libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
I suspect it's a timestamp issue, because RTSP directly from real cameras works.
I want to simulate several RTSP cameras by streaming video files through FFmpeg. I'm using this command:
./mediamtx &
ffmpeg -re -stream_loop -1 -i ./film/test.mp4 -vf "fps=5" -b:v 2M -maxrate 4M -bufsize 4M -an -g 10 -color_range pc -preset ultrafast -vcodec libx264 -colorspace bt709 -color_primaries bt709 -color_trc bt709 -pix_fmt yuv420p -rtsp_transport tcp -f rtsp rtsp://test:Test123@localhost:8081/stream_1080p_1
I'm using a mediamtx server (https://github.com/bluenviron/mediamtx).
The strangest thing is that if I remove mpsoc_vcu_h264 and xvbm_convert, it works perfectly.
This command works:
ffmpeg -loglevel debug -xlnx_hwdev 1 -y -vsync 0 -rtsp_transport tcp -i rtsp://35.18.1.218:8081/stream_1080p_1 -vf fps=5 -f rawvideo -pix_fmt rgb24 -
System Configuration
OS Name : Linux
Release : 5.11.0-1021-aws
Version : #22~20.04.2-Ubuntu SMP Wed Oct 27 21:27:13 UTC 2021
Machine : x86_64
CPU Cores : 12
Memory : 22530 MB
Distribution : Ubuntu 20.04.3 LTS
GLIBC : 2.31
Model : vt1.3xlarge
XRT
Version : 2.11.691
Branch : 2021.1
Hash : 3e695ed86d15164e36267fb83def6ff2aaecd758
Hash Date : 2021-11-18 18:16:26
XOCL : 2.11.691, 3e695ed86d15164e36267fb83def6ff2aaecd758
XCLMGMT : unknown, unknown
Devices present
[0000:00:1f.0] : xilinx_u30_gen3x4_base_2
[0000:00:1e.0] : xilinx_u30_gen3x4_base_2
Regards, Kevin
I have allowed public access to the RTSP links I am trying to process:
rtsp://teste:Teste1234@35.199.111.233:8081/stream_1080p_1 and rtsp://teste:Teste1234@35.199.111.233:8081/stream_720p_1
1080p@5FPS and 720p@5FPS
Thank you!
Hi,
Thank you for making those streams available. I am able to reproduce your results, and it seems there is a problem with our decoder not being able to get proper timestamps from these streams. (I am afraid the working case you noted was using the built-in CPU-based h264 decoder.) I've informed our engineering team of this issue and will provide an update once I get a reply.
You may want to consider the following workaround:
ffmpeg -y -hide_banner -loglevel fatal -rtsp_transport tcp -i rtsp://teste:Teste1234@35.199.111.233:8081/stream_1080p_1 -c:v copy -f mpegts - | ffmpeg -loglevel debug -xlnx_hwdev 1 -y -vsync 0 -vcodec mpsoc_vcu_h264 -i - -vf xvbm_convert,fps=5,setrange=limited -pix_fmt yuv420p -f rawvideo -frames:v 1500 - > /dev/null
Looking at my VT1 instance, the overhead due to the 1st FFmpeg process is around 0.3%.
Cheers,
Thank you for your help!
I have a Python script to catch the output from the FFmpeg pipe and ingest it into a Python machine-learning pipeline, but now it's not working. I think it must be because the command has more than one pipe.
Could you suggest something for the Python script below?
Simple example:
import subprocess as sp
import numpy as np
cmd = ['ffmpeg', '-y', '-hide_banner', '-loglevel', 'fatal',
       '-rtsp_transport', 'tcp', '-i', 'rtsp://teste:Teste1234@35.199.111.233:8081/stream_1080p_1', '-c:v', 'copy', '-f', 'mpegts', '-',
       '|', 'ffmpeg', '-loglevel', 'debug', '-xlnx_hwdev', '1', '-y', '-vsync', '0',
       '-vcodec', 'mpsoc_vcu_h264', '-i', '-', '-vf', 'xvbm_convert,fps=5,setrange=limited',
       '-pix_fmt', 'rgb24', '-f', 'rawvideo', '-']
p = sp.Popen(cmd, stdout=sp.PIPE)
w = 1920
h = 1080
while True:
    blob = p.stdout.read(w*h*3)
    frame = np.frombuffer(blob, dtype=np.uint8).reshape((h, w, 3))
    print(frame.shape)
Regards, Kevin
Hi, Broadly speaking, you have 2 choices if you want to stick with subprocess: 1) the frowned-upon method of passing shell=True to Popen, or 2) running 2 subprocesses and passing the output of the 1st one as the input to the 2nd:
p1 = sp.Popen(cmd1, stdout=sp.PIPE)
p2 = sp.Popen(cmd2, stdin=p1.stdout)
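Applied to your two FFmpeg commands, option 2 could look like the following sketch (a minimal version assuming the 1080p rgb24 output from your script, with the '|' removed from the argument lists; closing p1.stdout in the parent lets the 1st FFmpeg receive SIGPIPE if the 2nd exits):
import subprocess as sp
import numpy as np
# 1st process: pull the RTSP stream and remux it to MPEG-TS on stdout.
cmd1 = ['ffmpeg', '-y', '-hide_banner', '-loglevel', 'fatal',
        '-rtsp_transport', 'tcp', '-i', 'rtsp://teste:Teste1234@35.199.111.233:8081/stream_1080p_1',
        '-c:v', 'copy', '-f', 'mpegts', '-']
# 2nd process: decode on the device and emit raw rgb24 frames on stdout.
cmd2 = ['ffmpeg', '-loglevel', 'fatal', '-xlnx_hwdev', '1', '-y', '-vsync', '0',
        '-vcodec', 'mpsoc_vcu_h264', '-i', '-', '-vf', 'xvbm_convert,fps=5,setrange=limited',
        '-pix_fmt', 'rgb24', '-f', 'rawvideo', '-']
p1 = sp.Popen(cmd1, stdout=sp.PIPE)
p2 = sp.Popen(cmd2, stdin=p1.stdout, stdout=sp.PIPE)
p1.stdout.close()  # let p1 see SIGPIPE if p2 exits
w, h = 1920, 1080
while True:
    blob = p2.stdout.read(w * h * 3)   # one rgb24 frame
    if len(blob) < w * h * 3:          # stream ended or pipe broke
        break
    frame = np.frombuffer(blob, dtype=np.uint8).reshape((h, w, 3))
    print(frame.shape)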
Other variations worth mentioning are: using the multiprocessing module with a Queue, passing messages through a named pipe, etc. Cheers,
Thank you!
I noticed that I am not able to decode more than 60 videos at 720p@5fps, because I get an insufficient-memory message.
[XMA] ERROR: ffmpeg xma-vcu-decoder VCU_INIT failed: device error: insufficient memory to create decoder instance
According to the commands xbutil examine -d 0000:00:1e.0 -r memory and xbutil examine -d 0000:00:1f.0 -r memory, I still have memory available:
----------------------------------------------
1/1 [0000:00:1e.0] : xilinx_u30_gen3x4_base_2
----------------------------------------------
Memory Information
Memory Topology
Tag Type Temp(C) Size Base Address
[ 0] DDR[0] MEM_DRAM N/A 1023 MB 0x800000000
Memory Status
Tag Type Size Mem Usage BO count
[ 0] DDR[0] MEM_DRAM 2 GB 433 MB 690
Soft Kernel Memory Status
HostBO Count MapBO Count UnmapBO Count FreeBO Count
0 0 0 140732799458016
DMA Transfer Metrics
Chan[ 0].h2c: 4803 MB
Chan[ 0].c2h: 51662 MB
Chan[ 1].h2c: 491 MB
Chan[ 1].c2h: 5527 MB
----------------------------------------------
1/1 [0000:00:1f.0] : xilinx_u30_gen3x4_base_2
----------------------------------------------
Memory Information
Memory Topology
Tag Type Temp(C) Size Base Address
[ 0] DDR[0] MEM_DRAM N/A 1023 MB 0x800000000
Memory Status
Tag Type Size Mem Usage BO count
[ 0] DDR[0] MEM_DRAM 2 GB 433 MB 690
Soft Kernel Memory Status
HostBO Count MapBO Count UnmapBO Count FreeBO Count
0 0 0 140735764631424
DMA Transfer Metrics
Chan[ 0].h2c: 4849 MB
Chan[ 0].c2h: 54859 MB
Chan[ 1].h2c: 526 MB
Chan[ 1].c2h: 5847 MB
Code to reproduce:
import sys
import traceback
from time import sleep
import subprocess as sp
import numpy as np
import threading

hosts = ['34.151.215.57', '34.95.216.195', '35.199.111.233']
cameras = []
res = '720p'
for server_id in range(1, 7):
    for stream_id in range(1, 11):
        for h in hosts:
            rtsp = f'rtsp://teste:Teste1234@{h}:{8080+server_id}/stream_{res}_{stream_id}'
            cameras.append(rtsp)
print(f'Total cameras: {len(cameras)}')

online = 0
lock = threading.Lock()

def send_frame(rtsp, number, lock):
    global online
    port = 8080
    if '720p' in rtsp:
        params = {'width': 1280, 'height': 720}
    elif '1080p' in rtsp:
        params = {'width': 1920, 'height': 1080}
    else:
        raise Exception('Invalid stream type')
    if (number % 2) == 0:
        device_id = 0
    else:
        device_id = 1
    cmd1 = ['ffmpeg', '-y', '-hide_banner', '-loglevel', 'fatal',
            '-rtsp_transport', 'tcp', '-i', rtsp, '-c:v', 'copy', '-f', 'mpegts', '-']
    cmd2 = ['ffmpeg', '-loglevel', 'fatal', '-xlnx_hwdev', f'{device_id}', '-y', '-vsync', '0',
            '-vcodec', 'mpsoc_vcu_h264', '-i', '-', '-vf', 'xvbm_convert,fps=5,setrange=limited',
            '-pix_fmt', 'rgb24', '-f', 'rawvideo', '-']
    p1 = sp.Popen(cmd1, stdout=sp.PIPE)
    p2 = sp.Popen(cmd2, stdin=p1.stdout, stdout=sp.PIPE)
    try:
        counter = 0
        while True:
            blob = p2.stdout.read(params['width']*params['height']*3)
            frame = np.frombuffer(blob, dtype=np.uint8).reshape((params['height'], params['width'], 3))
            if counter == 10:
                lock.acquire()
                online += 1
                lock.release()
            counter = counter + 1
            print(f'online: {online}')
    except (KeyboardInterrupt, SystemExit):
        print('Exit due to keyboard interrupt')
    except Exception as ex:
        print('Python error with no Exception handler:')
        print('Traceback error:', ex)
        traceback.print_exc()
    finally:
        sys.exit()

ps = []
for i, rtsp in enumerate(cameras[:100]):
    p = threading.Thread(target=send_frame, args=(rtsp, i+1, lock,), daemon=True)
    ps.append(p)
for p in ps:
    p.start()
for p in ps:
    p.join()
Regards, Kevin
Hi,
A couple of things to try:
1- Change the read size from params['width']*params['height']*3 to params['width']*params['height']*1.5, as we are reading back a 4:2:0 frame (see the sketch after this list).
2- Concentrate on one stream at a time, using Bash directly, and see whether you can pass the 60-stream threshold as you gradually increase the number of streams. If you don't see a similar issue under Bash, then it may be that the threading module is lagging; if that is the case, consider using the multiprocessing module instead.
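For item 1, a minimal sketch of the adjusted read (assuming cmd2 is switched to -pix_fmt yuv420p; integer arithmetic avoids the float that multiplying by 1.5 would produce):
# A yuv420p frame is one full-size Y plane plus two quarter-size chroma
# planes: width*height*3//2 bytes in total.
frame_size = params['width'] * params['height'] * 3 // 2
blob = p2.stdout.read(frame_size)
# The Y plane alone can still be viewed as a 2-D array:
y_plane = np.frombuffer(blob[:params['width'] * params['height']],
                        dtype=np.uint8).reshape((params['height'], params['width']))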
Cheers,
Thank you @NastoohX!
Hi,
I got this error:
cmd:
ffmpeg -loglevel debug -xlnx_hwdev 1 -y -vsync 0 -rtsp_transport tcp -vcodec mpsoc_vcu_h264 -i rtsp://35.18.1.218:8081/stream_1080p_1 -vf xvbm_convert,fps=5 -f rawvideo -pix_fmt rgb24 -
SDP from RTSP:
Ubuntu 20.04 AWS EC2 vt1.3xlarge AMI aws-marketplace/video_sdk_v2.0_rc8_ubuntu2004_ami_05052022-22c0f2b0-021c-4ee6-98ff-9dacbc14fcf0
Regards, Kevin