If I check my motion.log, I find the following hint:
netcam_rtsp_ntc: ******************************************************
netcam_rtsp_ntc: The image sent is being
netcam_rtsp_ntc: trancoded to YUV420P. If possible change netcam
netcam_rtsp_ntc: picture format to YUV420P to possibly lower CPU usage.
netcam_rtsp_ntc: ******************************************************
I also notice a small increase in CPU utilization, which makes me think the format conversion is done on the CPU before the frames are handed to the GPU.
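(As a side check that the GPU video engine is actually doing the decode, I can watch its load while Motion runs; this assumes an Intel iGPU and the intel-gpu-tools package:
$ sudo intel_gpu_top
The Video engine row should show activity if VA-API decoding is in use.)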
However, when I check the actual pixel format of the stream, it is already YUV420P:
$ ffprobe -hide_banner rtsp://mycamera.lan:12345/stream
[rtsp @ 0x55e2cb8ad700] max delay reached. need to consume packet
[rtsp @ 0x55e2cb8ad700] RTP: missed 1 packets
Input #0, rtsp, from 'rtsp://mycamera.lan:12345/stream':
  Metadata:
    title           :
  Duration: N/A, start: 0.162811, bitrate: N/A
  Stream #0:0: Video: h264 (High), yuv420p(progressive), 1920x1080, 30 fps, 30 tbr, 90k tbn
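If it helps narrow things down, ffprobe can also print just the pixel format of the video stream (same placeholder URL as above):
$ ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt -of default=noprint_wrappers=1:nokey=1 rtsp://mycamera.lan:12345/stream
This only queries the pix_fmt field of the first video stream, so it is a quicker check than reading the full ffprobe dump.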
Is this somehow user error, or is there something wrong with Motion's detection of the pixel format in use?
I see there is an input_format option for netcam_params, but from the documentation it does not seem related.
For reference, I'm using the following lines in my camera.conf to enable hardware decoding (VA-API):
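(A sketch of the relevant lines; decoder=vaapi is how I read the netcam_params documentation for selecting a hardware decoder, and the URL is the same placeholder as in the ffprobe call above.)
# decoder=vaapi per my reading of the netcam_params docs; placeholder URL
netcam_url rtsp://mycamera.lan:12345/stream
netcam_params decoder=vaapi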