Describe the bug
I am using RTMP -> WebRTC and playing the WebRTC video-only stream in a frontend player. When the RTMP source goes down and reconnects a few seconds later, the WebRTC stream automatically continues, but is laggy/slowed down and delayed by about a second.
Version
image: ossrs/srs:v6.0.113
To Reproduce
Steps to reproduce the behavior:
Enable rtmp_to_rtc on (or use rtmp2rtc.conf for SRS)
Start a GStreamer pipeline with rtmpsink like this one:
v4l2src device=/dev/video4 ! image/jpeg,width=1920,height=1080,framerate=30/1 ! decodebin ! queue ! x264enc speed-preset=ultrafast tune=zerolatency ! h264parse ! flvmux streamable=true ! rtmpsink location="rtmp://localhost:1935/live/livestream live=1 buffer=0"
Visit the demo player for SRS at (we use WHEP.js and video.js, but this bug also happens in your demo player):
http://localhost:8080/players/whep.html?vhost=__defaultVhost__&app=live&stream=livestream&server=localhost&port=8080&autostart=true&schema=http
Interrupt and restart the GStreamer pipeline
The stream now continues, but becomes laggy/slows down and has a delay of ~1 second
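For reference, the SRS config we use for step 1 follows the stock rtmp2rtc.conf shipped with SRS; a minimal sketch (the ports and candidate value here are assumptions, adjust for your deployment):

```
listen              1935;
http_server {
    enabled         on;
    listen          8080;           # serves the demo players
}
http_api {
    enabled         on;
    listen          1985;
}
rtc_server {
    enabled         on;
    listen          8000;           # UDP port for WebRTC media
    candidate       $CANDIDATE;     # public IP of the server
}
vhost __defaultVhost__ {
    rtc {
        enabled     on;
        rtmp_to_rtc on;             # convert incoming RTMP to WebRTC
    }
}
```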
Expected behavior
No lag or delay is added when the stream source auto-reconnects (in this case, the RTMP source that SRS converts to WebRTC).
OR all rtc-play clients are closed/notified that the publisher is no longer present or is in a reconnecting state.
Additional context
Since this delay/lag appears with WHEP.js and video.js as well as with your own player, and since a simple refresh of the frontend fixes it, I am certain the problem lies with SRS. Where does the delay/buffer come from?
Why aren't the clients informed that the WebRTC stream has halted/is not sending any data?
I tried to delete all playing clients via the HTTP API, is this the only way?
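For reference, this is roughly what I scripted against the HTTP API (the port 1985, the `clients` response field names, and the DELETE endpoint are assumptions based on my reading of the v1 API; adjust as needed):

```python
import json
from urllib import request

API = "http://localhost:1985/api/v1"  # default SRS HTTP API port (assumption)

def playing_client_ids(clients_json: str, stream: str) -> list:
    """Select the ids of play (not publish) clients on a given stream
    from a /api/v1/clients response body; field names are assumptions."""
    data = json.loads(clients_json)
    return [c["id"] for c in data.get("clients", [])
            if c.get("stream") == stream
            and str(c.get("type", "")).endswith("play")]

def kick_players(stream: str) -> None:
    # Fetch the client list, then DELETE each playing client on the stream.
    with request.urlopen(f"{API}/clients/?count=100") as resp:
        ids = playing_client_ids(resp.read().decode(), stream)
    for cid in ids:
        request.urlopen(request.Request(f"{API}/clients/{cid}", method="DELETE"))
```

Even when kicked this way, the players still need their own client-side logic to re-issue the WHEP request, so a server-side solution would be much nicer.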
Is there a config setting to force all clients to refresh/reconnect once the initial stream source (RTMP) has reconnected?