Closed: guythnick closed this issue 1 year ago
myFile.log: Adding a full log file of me attempting to connect to the stream a few times just after the container was fully up.
Hi, thank you for the full logs. One question, is this the first time you have attempted to use ring-mqtt, or did it work in some previous version?
Also, another quick question, it appears that perhaps you do not have a subscription, is that true? It shouldn't technically matter, but I don't test ring-mqtt without subscriptions and I see some errors that appear to indicate there are no recordings available in the camera history. Just trying to figure out if I'm interpreting that correctly or if something else is going on.
This is the first time I am using your app, yes. And also no, I do not have a subscription.
Logs indicate that ring-mqtt is unable to establish the WebRTC session. The typical cause is the network blocking RTP streams for whatever reason. Unfortunately, ring-mqtt has limited visibility into what is happening behind the scenes; it just gets notified that the WebRTC session ended (see the message "Live stream WebRTC session has disconnected"), while normally you would expect to see "Live stream WebRTC session is connected" at that point.
You can get more insight into the WebRTC details by enabling debug for werift (run the container with DEBUG=ring-*,werift*). If you run with werift debugging enabled and provide those logs, it might give some clues; however, WebRTC failures are usually caused by networking issues.
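If it helps, here's roughly what that looks like as a plain docker run (the volume path is just an example, keep whatever data volume and other options you already use):

docker run --rm \
  -e "DEBUG=ring-*,werift*" \
  -v /etc/ring-mqtt:/data \
  tsightler/ring-mqtt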
However, I also noticed that you are running on unRAID. Based on another recently reported issue, I'm starting to suspect that unRAID is having some significant issues with Docker networking that make it unsuitable to run ring-mqtt for streaming (I don't know what it is as I'm not an unRAID user). See https://github.com/tsightler/ring-mqtt/issues/653 for more details.
I just downgraded Unraid back to 6.11.5 which was released last November. No change in behavior for me. The one Floodlight Plus can play Live and the non-transcoded event stream in HA (WebRTC) ... but that's it. The rest are still MSE only.
I don't think Ring has changed anything, the streams look like they've always looked to me and my cameras are basically the exact same generation as yours, with the exception of the single newer camera. Also, all of your cameras streamed fine for me and the connection between Ring and ring-mqtt is the same no matter what streaming protocol is used on the frontend.
Looking at the unRAID bug forums, there are quite a few Docker networking related issues reported for all kinds of things.
You even stated that everything worked when you put go2rtc on another machine, so how could that be Ring related?
Admittedly, the issue this user is having is very different from what you were experiencing, so it might not be related at all, but it's hard to ignore that suddenly there are two reports of issues running on unRAID but no similar reports on other configs.
Yeah, I'm just trying to follow the old rule, look for what changed between working and non-working. At this point I have everything in the "old" state and the behavior STILL hasn't changed. (ring-mqtt 5.2.0 and go2rtc 1.2.0, on an older Unraid build)
It's one hell of a mystery!!!
Is it possible there are environmental changes? For example, perhaps IPv6 was enabled by the ISP (or internally), or perhaps the client versions changed (Chrome, and really all browsers based on WebKit, would have seen multiple updates in that time). Or perhaps upgrading the unRAID version changed some settings that didn't revert on downgrade (firewall or network settings; I saw stuff about macvlan vs ipvlan in the forums, although I have no idea what that even is).
If I had a spare piece of hardware I'd install unRAID myself, but I'm not sure it would be as useful to put it on a VM because you always get weird networking issues with nested virtualization.
Yeah, very good points. Another thing I thought of is my networking gear, I have all Ubiquiti equipment. UDM Base, and a couple Nano HD access points. They have updates too so they could have broken something internally in the way routing happens.
The one odd part of the mystery is why RTSPtoWEB works on all the cameras, but not go2rtc. The only bummer about it is no audio.
Yeah, I also have mostly Unifi networking gear, all my wifi is Unifi, but my main switch is a Netgear business grade switch, and my lab instance is connected to an enterprise grade switch with a pretty complex VLAN setup running VMware and the instance lives on an isolated network. Production HA instance is on isolated VLAN in an IoT network routed via a firewall and running on a Proxmox host.
Point being, I have a pretty complex network with a lot of gear and it all just works everywhere. We are not talking about complex stuff here from a networking layer perspective, at least assuming ports are not blocked.
But even if they are, go2rtc will use TCP on the 8555 port for the RTP streams. It's just so, so strange, I'm mostly out of ideas.
Super weird question, if you use the Ring web dashboard to view the streams, does it work? I assume it has to, but it would be interesting to see the SDP offer and answer to the Ring endpoint, which you can get in the browser developer tools in the network tab. Just look for a POST to https://account.ring.com/api/cgw/integrations/v1/liveview/start and it will have a payload and response SDP.
Is this what you're looking for? This is the "Backyard" camera, one of the ones that doesn't work with go2rtc; it does work in the ring.com interface. (They REALLY need to fix the auth ... "are you sure, are you REALLY sure, are you SURE you're sure?")
Request:
{"session_id":"30ab199e-664d-48fa-a89d-a1d0023bf652","riid":"6baddccd13c7f4d3c7379b7b8dd338bd","device_id":5260689,"sdp":"v=0\r\no=- 7271424633916305206 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE 0 1\r\na=extmap-allow-mixed\r\na=msid-semantic: WMS\r\nm=audio 61111 UDP/TLS/RTP/SAVPF 111 63 9 0 8 13 110 126\r\nc=IN IP4 192.168.1.25\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=candidate:2081344135 1 udp 2122260223 192.168.1.25 61111 typ host generation 0 network-id 1 network-cost 10\r\na=candidate:2191793683 1 tcp 1518280447 192.168.1.25 9 typ host tcptype active generation 0 network-id 1 network-cost 10\r\na=ice-ufrag:zEKE\r\na=ice-pwd:7ZqH3yLT3JlmzXuAmHptsln1\r\na=ice-options:trickle\r\na=fingerprint:sha-256 88:BC:B3:81:DD:48:02:A4:C5:8F:63:65:85:50:26:A2:DC:D3:5D:C0:4C:CE:ED:33:B2:04:FE:1A:7D:45:7F:B7\r\na=setup:actpass\r\na=mid:0\r\na=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level\r\na=extmap:2 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time\r\na=extmap:3 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01\r\na=extmap:4 urn:ietf:params:rtp-hdrext:sdes:mid\r\na=sendrecv\r\na=msid:- 45fd8ac7-3e73-469c-bd80-f32729ce4543\r\na=rtcp-mux\r\na=rtpmap:111 opus/48000/2\r\na=rtcp-fb:111 transport-cc\r\na=fmtp:111 minptime=10;useinbandfec=1\r\na=rtpmap:63 red/48000/2\r\na=fmtp:63 111/111\r\na=rtpmap:9 G722/8000\r\na=rtpmap:0 PCMU/8000\r\na=rtpmap:8 PCMA/8000\r\na=rtpmap:13 CN/8000\r\na=rtpmap:110 telephone-event/48000\r\na=rtpmap:126 telephone-event/8000\r\na=ssrc:3469230405 cname:8brDt+G5ilBuQasB\r\na=ssrc:3469230405 msid:- 45fd8ac7-3e73-469c-bd80-f32729ce4543\r\nm=video 63638 UDP/TLS/RTP/SAVPF 96 97 98 99 100 101 35 36 37 38 102 103 104 105 106 107 108 109 127 125 39 40 41 42 43 44 45 46 47 48 112 113 114 115 116 117 118 49\r\nc=IN IP4 192.168.1.25\r\na=rtcp:9 IN IP4 0.0.0.0\r\na=candidate:2081344135 1 udp 2122260223 192.168.1.25 63638 typ host generation 0 network-id 1 network-cost 10\r\na=candidate:2191793683 1 tcp 1518280447 192.168.1.25 9 typ host tcptype active generation 0 network-id 1 network-cost 10\r\na=ice-ufrag:zEKE\r\na=ice-pwd:7ZqH3yLT3JlmzXuAmHptsln1\r\na=ice-options:trickle\r\na=fingerprint:sha-256 88:BC:B3:81:DD:48:02:A4:C5:8F:63:65:85:50:26:A2:DC:D3:5D:C0:4C:CE:ED:33:B2:04:FE:1A:7D:45:7F:B7\r\na=setup:actpass\r\na=mid:1\r\na=extmap:14 urn:ietf:params:rtp-hdrext:toffset\r\na=extmap:2 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time\r\na=extmap:13 urn:3gpp:video-orientation\r\na=extmap:3 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01\r\na=extmap:5 http://www.webrtc.org/experiments/rtp-hdrext/playout-delay\r\na=extmap:6 http://www.webrtc.org/experiments/rtp-hdrext/video-content-type\r\na=extmap:7 http://www.webrtc.org/experiments/rtp-hdrext/video-timing\r\na=extmap:8 http://www.webrtc.org/experiments/rtp-hdrext/color-space\r\na=extmap:4 urn:ietf:params:rtp-hdrext:sdes:mid\r\na=extmap:10 urn:ietf:params:rtp-hdrext:sdes:rtp-stream-id\r\na=extmap:11 urn:ietf:params:rtp-hdrext:sdes:repaired-rtp-stream-id\r\na=recvonly\r\na=rtcp-mux\r\na=rtcp-rsize\r\na=rtpmap:96 VP8/90000\r\na=rtcp-fb:96 goog-remb\r\na=rtcp-fb:96 transport-cc\r\na=rtcp-fb:96 ccm fir\r\na=rtcp-fb:96 nack\r\na=rtcp-fb:96 nack pli\r\na=rtpmap:97 rtx/90000\r\na=fmtp:97 apt=96\r\na=rtpmap:98 VP9/90000\r\na=rtcp-fb:98 goog-remb\r\na=rtcp-fb:98 transport-cc\r\na=rtcp-fb:98 ccm fir\r\na=rtcp-fb:98 nack\r\na=rtcp-fb:98 nack pli\r\na=fmtp:98 profile-id=0\r\na=rtpmap:99 rtx/90000\r\na=fmtp:99 apt=98\r\na=rtpmap:100 VP9/90000\r\na=rtcp-fb:100 
goog-remb\r\na=rtcp-fb:100 transport-cc\r\na=rtcp-fb:100 ccm fir\r\na=rtcp-fb:100 nack\r\na=rtcp-fb:100 nack pli\r\na=fmtp:100 profile-id=2\r\na=rtpmap:101 rtx/90000\r\na=fmtp:101 apt=100\r\na=rtpmap:35 VP9/90000\r\na=rtcp-fb:35 goog-remb\r\na=rtcp-fb:35 transport-cc\r\na=rtcp-fb:35 ccm fir\r\na=rtcp-fb:35 nack\r\na=rtcp-fb:35 nack pli\r\na=fmtp:35 profile-id=1\r\na=rtpmap:36 rtx/90000\r\na=fmtp:36 apt=35\r\na=rtpmap:37 VP9/90000\r\na=rtcp-fb:37 goog-remb\r\na=rtcp-fb:37 transport-cc\r\na=rtcp-fb:37 ccm fir\r\na=rtcp-fb:37 nack\r\na=rtcp-fb:37 nack pli\r\na=fmtp:37 profile-id=3\r\na=rtpmap:38 rtx/90000\r\na=fmtp:38 apt=37\r\na=rtpmap:102 H264/90000\r\na=rtcp-fb:102 goog-remb\r\na=rtcp-fb:102 transport-cc\r\na=rtcp-fb:102 ccm fir\r\na=rtcp-fb:102 nack\r\na=rtcp-fb:102 nack pli\r\na=fmtp:102 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42001f\r\na=rtpmap:103 rtx/90000\r\na=fmtp:103 apt=102\r\na=rtpmap:104 H264/90000\r\na=rtcp-fb:104 goog-remb\r\na=rtcp-fb:104 transport-cc\r\na=rtcp-fb:104 ccm fir\r\na=rtcp-fb:104 nack\r\na=rtcp-fb:104 nack pli\r\na=fmtp:104 level-asymmetry-allowed=1;packetization-mode=0;profile-level-id=42001f\r\na=rtpmap:105 rtx/90000\r\na=fmtp:105 apt=104\r\na=rtpmap:106 H264/90000\r\na=rtcp-fb:106 goog-remb\r\na=rtcp-fb:106 transport-cc\r\na=rtcp-fb:106 ccm fir\r\na=rtcp-fb:106 nack\r\na=rtcp-fb:106 nack pli\r\na=fmtp:106 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42e01f\r\na=rtpmap:107 rtx/90000\r\na=fmtp:107 apt=106\r\na=rtpmap:108 H264/90000\r\na=rtcp-fb:108 goog-remb\r\na=rtcp-fb:108 transport-cc\r\na=rtcp-fb:108 ccm fir\r\na=rtcp-fb:108 nack\r\na=rtcp-fb:108 nack pli\r\na=fmtp:108 level-asymmetry-allowed=1;packetization-mode=0;profile-level-id=42e01f\r\na=rtpmap:109 rtx/90000\r\na=fmtp:109 apt=108\r\na=rtpmap:127 H264/90000\r\na=rtcp-fb:127 goog-remb\r\na=rtcp-fb:127 transport-cc\r\na=rtcp-fb:127 ccm fir\r\na=rtcp-fb:127 nack\r\na=rtcp-fb:127 nack pli\r\na=fmtp:127 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=4d001f\r\na=rtpmap:125 rtx/90000\r\na=fmtp:125 apt=127\r\na=rtpmap:39 H264/90000\r\na=rtcp-fb:39 goog-remb\r\na=rtcp-fb:39 transport-cc\r\na=rtcp-fb:39 ccm fir\r\na=rtcp-fb:39 nack\r\na=rtcp-fb:39 nack pli\r\na=fmtp:39 level-asymmetry-allowed=1;packetization-mode=0;profile-level-id=4d001f\r\na=rtpmap:40 rtx/90000\r\na=fmtp:40 apt=39\r\na=rtpmap:41 H264/90000\r\na=rtcp-fb:41 goog-remb\r\na=rtcp-fb:41 transport-cc\r\na=rtcp-fb:41 ccm fir\r\na=rtcp-fb:41 nack\r\na=rtcp-fb:41 nack pli\r\na=fmtp:41 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=f4001f\r\na=rtpmap:42 rtx/90000\r\na=fmtp:42 apt=41\r\na=rtpmap:43 H264/90000\r\na=rtcp-fb:43 goog-remb\r\na=rtcp-fb:43 transport-cc\r\na=rtcp-fb:43 ccm fir\r\na=rtcp-fb:43 nack\r\na=rtcp-fb:43 nack pli\r\na=fmtp:43 level-asymmetry-allowed=1;packetization-mode=0;profile-level-id=f4001f\r\na=rtpmap:44 rtx/90000\r\na=fmtp:44 apt=43\r\na=rtpmap:45 AV1/90000\r\na=rtcp-fb:45 goog-remb\r\na=rtcp-fb:45 transport-cc\r\na=rtcp-fb:45 ccm fir\r\na=rtcp-fb:45 nack\r\na=rtcp-fb:45 nack pli\r\na=rtpmap:46 rtx/90000\r\na=fmtp:46 apt=45\r\na=rtpmap:47 AV1/90000\r\na=rtcp-fb:47 goog-remb\r\na=rtcp-fb:47 transport-cc\r\na=rtcp-fb:47 ccm fir\r\na=rtcp-fb:47 nack\r\na=rtcp-fb:47 nack pli\r\na=fmtp:47 profile=1\r\na=rtpmap:48 rtx/90000\r\na=fmtp:48 apt=47\r\na=rtpmap:112 H264/90000\r\na=rtcp-fb:112 goog-remb\r\na=rtcp-fb:112 transport-cc\r\na=rtcp-fb:112 ccm fir\r\na=rtcp-fb:112 nack\r\na=rtcp-fb:112 nack pli\r\na=fmtp:112 
level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=64001f\r\na=rtpmap:113 rtx/90000\r\na=fmtp:113 apt=112\r\na=rtpmap:114 H264/90000\r\na=rtcp-fb:114 goog-remb\r\na=rtcp-fb:114 transport-cc\r\na=rtcp-fb:114 ccm fir\r\na=rtcp-fb:114 nack\r\na=rtcp-fb:114 nack pli\r\na=fmtp:114 level-asymmetry-allowed=1;packetization-mode=0;profile-level-id=64001f\r\na=rtpmap:115 rtx/90000\r\na=fmtp:115 apt=114\r\na=rtpmap:116 red/90000\r\na=rtpmap:117 rtx/90000\r\na=fmtp:117 apt=116\r\na=rtpmap:118 ulpfec/90000\r\na=rtpmap:49 flexfec-03/90000\r\na=rtcp-fb:49 goog-remb\r\na=rtcp-fb:49 transport-cc\r\na=fmtp:49 repair-window=10000000\r\n","protocol":"webrtc"}
Response:
{
"sdp": "v=0\r\no=- 7271424633916305206 2 IN IP4 0.0.0.0\r\ns=rmsbe7e8c6d\r\nt=0 0\r\nm=audio 9 UDP/TLS/RTP/SAVPF 111\r\nc=IN IP4 0.0.0.0\r\na=ice-ufrag:wyzl9XbJqLMKzMg1qgFnt7HCmD4XMDg0\r\na=ice-pwd:3LkZ8CKwbbeNfiS98g82wpOeJ9gGi/tI\r\na=candidate:1 1 UDP 2013266431 2600:1f18:17b4:b200:38a3:3187:39ce:ddb6 48946 typ host\r\na=candidate:3 1 TCP 1010828799 2600:1f18:17b4:b200:38a3:3187:39ce:ddb6 443 typ host tcptype passive\r\na=candidate:4 1 UDP 2013266430 2600:1f18:17b4:b200:38a3:3187:39ce:ddb6 36991 typ host\r\na=candidate:7 1 UDP 2013266429 52.202.155.7 38340 typ host\r\na=candidate:9 1 TCP 1010828031 52.202.155.7 443 typ host tcptype passive\r\na=mid:0\r\na=rtcp-mux\r\na=setup:active\r\na=rtpmap:111 OPUS/48000/2\r\na=rtcp-fb:111 transport-cc\r\na=fmtp:111 minptime=10;useinbandfec=1;sprop-stereo=0\r\na=ssrc:783809286 msid:user2220377141@host-5febb630 webrtctransceiver5599\r\na=ssrc:783809286 cname:user2220377141@host-5febb630\r\na=sendrecv\r\na=fingerprint:sha-256 E1:EE:CD:A5:7D:CB:29:55:CF:C7:28:11:8C:2C:5B:3A:CF:0C:79:23:31:13:E9:02:48:1B:2B:B1:3E:87:F0:C7\r\nm=video 9 UDP/TLS/RTP/SAVPF 112 113\r\nc=IN IP4 0.0.0.0\r\na=ice-ufrag:wyzl9XbJqLMKzMg1qgFnt7HCmD4XMDg1\r\na=ice-pwd:3LkZ8CKwbbeNfiS98g82wpOeJ9gGi/tI\r\na=candidate:1 1 UDP 2013266431 2600:1f18:17b4:b200:38a3:3187:39ce:ddb6 54278 typ host\r\na=candidate:3 1 TCP 1010828799 2600:1f18:17b4:b200:38a3:3187:39ce:ddb6 443 typ host tcptype passive\r\na=candidate:4 1 UDP 2013266430 2600:1f18:17b4:b200:38a3:3187:39ce:ddb6 52944 typ host\r\na=candidate:7 1 UDP 2013266429 52.202.155.7 27811 typ host\r\na=candidate:9 1 TCP 1010828031 52.202.155.7 443 typ host tcptype passive\r\na=mid:1\r\na=rtcp-mux\r\na=setup:active\r\na=rtpmap:112 H264/90000\r\na=rtcp-fb:112 nack\r\na=rtcp-fb:112 nack pli\r\na=rtcp-fb:112 ccm fir\r\na=rtcp-fb:112 transport-cc\r\na=fmtp:112 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=64001f\r\na=rtpmap:113 rtx/90000\r\na=fmtp:113 apt=112\r\na=ssrc-group:FID 4214005731 119666684\r\na=ssrc:4214005731 msid:user2220377141@host-5febb630 webrtctransceiver5600\r\na=ssrc:4214005731 cname:user2220377141@host-5febb630\r\na=ssrc:119666684 msid:user2220377141@host-5febb630 webrtctransceiver5600\r\na=ssrc:119666684 cname:user2220377141@host-5febb630\r\na=sendonly\r\na=fingerprint:sha-256 E1:EE:CD:A5:7D:CB:29:55:CF:C7:28:11:8C:2C:5B:3A:CF:0C:79:23:31:13:E9:02:48:1B:2B:B1:3E:87:F0:C7\r\n"
}
OK, that's what I see for my cameras as well. What is interesting is that the profile in the request is:
profile-level-id=4d001f
Which is Main Profile Level 3.1 while in the response it is:
profile-level-id=64001f
High Profile Level 3.1
No earth-shattering information here, but it's a little bit interesting.
Yeah, I keep coming back to the profile difference... the only working camera negotiates at a different level than the 3 that don't (but only for me; for you, it negotiated 4.1 High like the rest of mine). This issue is SO bizarre.
I did some experimenting and am still getting the same result. I have changed the networking mode in Docker to host and to br0 (a custom network that gives containers their own IP), and it is the same result. I run an instance of docker-wyze-bridge, which serves a similar function, and it works fine.
For ring-mqtt I was initially using container port 8554 mapped to host port 8448, but I have also tried just allocating 8554 for both, with the same result.
Is there anything out of the ordinary for this warning that shows up in the logs often?
2023-06-29T13:44:43.689Z ring-mqtt WARNING - Unhandled Promise Rejection
2023-06-29T13:44:43.689Z ring-mqtt TypeError: Cannot read properties of undefined (reading 'url')
@guythnick Yes, the problem in your case does not appear to be with the frontend, it is the backend WebRTC connection, so changing ports and such is unlikely to solve your issue as those ports are only for RTSP. This is why I asked you to run the container with DEBUG=ring-*,werift* and get the full logs from werift, as hopefully that will provide some insight into why the WebRTC connection to Ring is failing. You can see it in your logs here:
2023-06-28T18:23:56.982Z ring-rtsp [go2rtc] DBG [rtsp] new consumer stream=90486c0ef167_live
2023-06-28T18:23:56.982Z ring-rtsp [go2rtc] DBG [exec] run url="exec:/app/ring-mqtt/scripts/start-stream.sh 90486c0ef167 live ring/cf988c80-4c29-472c-8b29-ae5424d5c633/camera/90486c0ef167 {output}"
2023-06-28T18:23:56.996Z ring-rtsp [Front Door] Sending command to activate live stream ON-DEMAND
2023-06-28T18:23:56.998Z ring-mqtt [Front Door] Received set live stream state ON-DEMAND rtsp://127.0.0.1:8554/fb3e5ae3254b0ac6f8f6fd8d6383e39d
2023-06-28T18:23:56.998Z ring-mqtt [Front Door] ring/cf988c80-4c29-472c-8b29-ae5424d5c633/camera/90486c0ef167/stream/state ON
2023-06-28T18:23:56.998Z ring-attr [Front Door] ring/cf988c80-4c29-472c-8b29-ae5424d5c633/camera/90486c0ef167/stream/attributes {"status":"activating"}
2023-06-28T18:23:56.999Z ring-mqtt [Front Door] Initializing a live stream session for Ring cloud
2023-06-28T18:23:57.074Z ring-rtsp [Front Door] State indicates live stream is activating
2023-06-28T18:23:57.308Z ring-mqtt [Front Door] Live stream session successfully initialized, starting worker
2023-06-28T18:23:57.309Z ring-wrtc [Front Door] Live stream WebRTC worker received start command
2023-06-28T18:23:57.338Z ring-wrtc [Front Door] Live stream transcoding process is starting
2023-06-28T18:23:57.711Z ring-wrtc [Front Door] Websocket signaling for Ring cloud connected successfully
2023-06-28T18:23:58.073Z ring-wrtc [Front Door] Live stream transcoding process has started
2023-06-28T18:24:18.182Z ring-wrtc [Front Door] Live stream WebRTC session has disconnected
2023-06-28T18:24:18.182Z ring-mqtt [Front Door] ring/cf988c80-4c29-472c-8b29-ae5424d5c633/camera/90486c0ef167/stream/state OFF
2023-06-28T18:24:18.183Z ring-attr [Front Door] ring/cf988c80-4c29-472c-8b29-ae5424d5c633/camera/90486c0ef167/stream/attributes {"status":"inactive"}
2023-06-28T18:24:18.221Z ring-rtsp [Front Door] State indicates live stream has gone inactive
2023-06-28T18:24:18.223Z ring-rtsp [go2rtc] DBG [exec] run url="exec:/app/ring-mqtt/scripts/start-stream.sh 90486c0ef167 live ring/cf988c80-4c29-472c-8b29-ae5424d5c633/camera/90486c0ef167 {output}"
And, really, it's these two lines that are the key:
2023-06-28T18:23:58.073Z ring-wrtc [Front Door] Live stream transcoding process has started
2023-06-28T18:24:18.182Z ring-wrtc [Front Door] Live stream WebRTC session has disconnected
Everything up to that point looks perfect: the WebRTC connection is prepared, the control channel with the Ring API is established, and the first message indicates that we are starting ffmpeg and asking WebRTC to send us data, which starts the peer connection process where WebRTC establishes the RTP streams with Ring. Normally, you would expect to see, in about a second or so, the message "Live stream WebRTC session is connected", which would indicate that WebRTC was able to successfully establish the peer connection. However, in your case there's about 20 seconds, almost exactly, and then the disconnect message, which indicates it is probably a timeout, almost certainly meaning that the peer connection process failed to establish the connection.
The usual cause is failure to configure UDP sessions (I don't think werift supports TCP for the WebRTC streams yet), but the debug logs would (hopefully) provide more insight. Perhaps it's an overly aggressive firewall that blocks UDP, or some other issue like an ISP using double NAT.
Regarding the "url" messages, this is why I asked if you had a subscription. Those messages should only happen if you don't have a subscription, which you indicated you don't. It's actually just a bug that I need to fix to suppress the polling, but it's not related to the streaming issue at all.
Ah, sorry I missed that. Just added that debug and tried to initiate the stream a couple of times.
OK, so that log is interesting. It will take me some time to analyze more deeply, but, at a cursory glance, it appears to show some pretty serious issues with UDP networking. Everything looks OK during the initial steps: ICE handles the peer negotiation and connections are established. However, once DTLS negotiation starts, things take a bad turn. DTLS is the protocol used to negotiate the encryption used for the RTP traffic sent over UDP; it's effectively the UDP equivalent of TLS for TCP.
When comparing the DTLS negotiation to a working case, things seem OK at first, but then suddenly there's:
2023-06-29T14:15:47.623Z werift-dtls : packages/dtls/src/socket.ts : err onHandleHandshakes error Error
at generateKeyPair (/app/ring-mqtt/node_modules/werift/lib/dtls/src/cipher/namedCurve.js:52:19)
at /app/ring-mqtt/node_modules/werift/lib/dtls/src/flight/client/flight5.js:201:64
at Flight5.handleHandshake (/app/ring-mqtt/node_modules/werift/lib/dtls/src/flight/client/flight5.js:68:15)
at DtlsClient.value (/app/ring-mqtt/node_modules/werift/lib/dtls/src/client.js:54:47)
2023-06-29T14:15:47.624Z werift:packages/webrtc/src/transport/dtls.ts dtls failed Error
at generateKeyPair (/app/ring-mqtt/node_modules/werift/lib/dtls/src/cipher/namedCurve.js:52:19)
at /app/ring-mqtt/node_modules/werift/lib/dtls/src/flight/client/flight5.js:201:64
at Flight5.handleHandshake (/app/ring-mqtt/node_modules/werift/lib/dtls/src/flight/client/flight5.js:68:15)
at DtlsClient.value (/app/ring-mqtt/node_modules/werift/lib/dtls/src/client.js:54:47)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
This doesn't appear to be fatal, it looks like DTLS keeps trying for a while, but, eventually, it gets too many retransmits and fails hard:
2023-06-29T14:15:58.134Z werift-dtls : packages/dtls/src/flight/flight.ts : warn retransmit 5 5
2023-06-29T14:15:59.234Z werift-dtls : packages/dtls/src/flight/flight.ts : warn retransmit 10 5
2023-06-29T14:15:59.234Z werift-dtls : packages/dtls/src/flight/flight.ts : err retransmit failed 11
2023-06-29T14:15:59.234Z werift-dtls : packages/dtls/src/socket.ts : err onHandleHandshakes error Error: over retransmitCount : 5 7
at Flight5.transmit (/app/ring-mqtt/node_modules/werift/lib/dtls/src/flight/flight.js:85:19)
at async Flight5.exec (/app/ring-mqtt/node_modules/werift/lib/dtls/src/flight/client/flight5.js:88:9)
at async DtlsClient.value (/app/ring-mqtt/node_modules/werift/lib/dtls/src/client.js:67:33)
2023-06-29T14:15:59.234Z werift:packages/webrtc/src/transport/dtls.ts dtls failed Error: over retransmitCount : 5 7
at Flight5.transmit (/app/ring-mqtt/node_modules/werift/lib/dtls/src/flight/flight.js:85:19)
at async Flight5.exec (/app/ring-mqtt/node_modules/werift/lib/dtls/src/flight/client/flight5.js:88:9)
at async DtlsClient.value (/app/ring-mqtt/node_modules/werift/lib/dtls/src/client.js:67:33)
2023-06-29T14:15:59.240Z werift-dtls : packages/dtls/src/flight/flight.ts : warn retransmit 10 5
2023-06-29T14:15:59.240Z werift-dtls : packages/dtls/src/flight/flight.ts : err retransmit failed 11
2023-06-29T14:15:59.240Z werift-dtls : packages/dtls/src/socket.ts : err onHandleHandshakes error Error: over retransmitCount : 5 7
at Flight5.transmit (/app/ring-mqtt/node_modules/werift/lib/dtls/src/flight/flight.js:85:19)
at async Flight5.exec (/app/ring-mqtt/node_modules/werift/lib/dtls/src/flight/client/flight5.js:88:9)
at async DtlsClient.value (/app/ring-mqtt/node_modules/werift/lib/dtls/src/client.js:67:33)
2023-06-29T14:15:59.240Z werift:packages/webrtc/src/transport/dtls.ts dtls failed Error: over retransmitCount : 5 7
at Flight5.transmit (/app/ring-mqtt/node_modules/werift/lib/dtls/src/flight/flight.js:85:19)
at async Flight5.exec (/app/ring-mqtt/node_modules/werift/lib/dtls/src/flight/client/flight5.js:88:9)
at async DtlsClient.value (/app/ring-mqtt/node_modules/werift/lib/dtls/src/client.js:67:33)
Notice also that it's not just the retries but the time: from the first of these logs to the last is about 12 seconds, for something that should normally take under 1 second.
The various handshakes in the log fail in subtly different ways each time, but they always indicate network-related issues. For example, one of the earlier handshakes fails with this message:
2023-06-29T14:15:26.283Z werift-dtls : packages/dtls/record/receive.ts : err ContentType.alert Alert { level: 2, description: 10 } UnexpectedMessage flight 5 lastFlight [
Finished {
verifyData: <Buffer e4 8d 2b 8f b6 44 fb f1 70 1a 0c e6>,
msgType: 20,
messageSeq: 2
}
]
So it received some fragment of data, but not at all what was expected, then just a bit later:
2023-06-29T14:15:26.293Z werift-dtls : packages/dtls/record/receive.ts : err ContentType.alert Alert { level: 2, description: 51 } DecryptError flight 5 lastFlight [
Finished {
verifyData: <Buffer 0f 9d 4f 7c 6e 45 ba b1 5b 5d c9 64>,
msgType: 20,
messageSeq: 4
}
]
So again, received something, but couldn't decrypt it. Later on, there appear to be cases where it gets the same packet twice, perhaps due to the retransmits finally coming through.
I don't really know what is going on here, but I also don't see how I can do anything about it; clearly there is something wrong in the network path. My suspicion is that it's something to do with unRAID, since I have two people reporting different issues that both seem networking related, but that is a total guess on my part at this point.
Could it be something to do with Ring, or the upstream network? Well, maybe. As far as I can tell the Ring media servers are hosted in AWS, and, because AWS has lots of redundant network paths, I see packets take different routes between AWS and myself. For UDP this sometimes leads to out-of-order packets, which will cause slight artifacting in the videos played via ring-mqtt, because the retransmit handling in werift isn't quite as good as it is in most browsers. However, I've never seen cases where DTLS handshakes fail due to this issue. The problem is usually most apparent in the evenings, during periods of overall high Internet usage.
Quick question: would you be willing to share your camera with me to see if it works for me? I would only need it for a short time. I understand if it's too much of an intrusion, but I thought I would ask as it would help doubly eliminate any code-related issue. If so, you can send the invite to the same username at Gmail and I will just test it quickly, but if not, no issues, I just wanted to offer.
I just sent you an invite if you want to test. Really appreciate how helpful you are with this.
I also tried in Unraid mapping both the TCP and UDP ports for 8554:8558; before, it was only set to TCP. But I'm getting the same result.
It could be that Unraid's docker isn't compatible with whatever Ring is trying to do from a networking stance.
WebRTC uses random ports, again, the 8554 stuff is only for RTSP, which is the local side, not the backend between ring-mqtt and Ring.
I wonder if that could be the issue then, Unraid not allowing Docker to open the ports it needs. One thing I notice is that on the docker-wyze-bridge container, there are many more ports opened up, one specifically for WebRTC.
I wonder if there is an extra parameter I can add to the container to allow it to use ports that the container tries to spawn internally. I will search around.
Just as an example, these are the ports that container uses:
So I don't know anything about the wyze container, but that looks like it is configuration for how a client can connect to the streaming server in the container, which is the opposite of the ring-mqtt case where you are seeing issues.
When using ring-mqtt, the container is the client, attempting to stream from Ring media servers. There is no control over what ports Ring gives during the WebRTC peer connection process. Ring returns a pretty consistent list of IPs and ports: a set of UDP ports for both IPv4 and IPv6 to the specifically allocated media server, and then a TCP port (on 443) which is used as a fallback in case the dynamic UDP ports are blocked. Unfortunately, werift, the library that ring-mqtt uses to act as a WebRTC receiver (i.e. basically like a browser), appears not to work with TCP ICE candidates, at least, I couldn't get it to work, so only UDP can work.
However, if, for example, ring-mqtt were offering up WebRTC locally to other clients, using go2rtc for example, then it would also require settings to expose specific WebRTC ports to those clients (by default it is already 8555, it's just not enabled). However, ring-mqtt doesn't currently serve up WebRTC, HLS or RTMP to clients, only RTSP streams, so it only needs the RTSP ports, which is why that is all that is configurable.
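To sketch that concretely (a hypothetical run, the volume path is just an example): publishing the RTSP port is all that's needed on the ring-mqtt side, because the WebRTC leg to Ring is an outbound connection on ports negotiated at session time and can't be pre-mapped:

docker run -d \
  -p 8554:8554 \
  -v /etc/ring-mqtt:/data \
  tsightler/ring-mqtt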
Also, to be clear, I don't think it's a problem with ports not being open. Based on the logs, WebRTC does make it through the peer connection process and the peer reaches "connected" state. At that point some traffic does flow over those ports, it just doesn't get very far before there start to be retries, socket errors, and what appears to be packet corruption. If the ports were straight up blocked, the peer connection process would fail and never reach the connected state.
You can remove my access to your camera, it works perfectly for me. The startup takes a little bit longer than my own cameras, but I'm guessing that's a combination of the fact that you have a battery camera, which is always a little slower to respond to requests, and the fact that it's directing me to Ring media servers in AWS us-west-2 (Oregon, I believe), which is pretty far from me network-wise. Plus, based on your location, your camera isn't that close to Oregon either, so we're probably talking about a 150ms round trip from my setup to yours. But overall, it works perfectly fine, streams start in about 3.5 seconds.
Based on this, I'm going to have to say that there is something network related going on with unRAID that I can't do anything about at this time. I have no idea what it is though.
Ok, thanks for the thorough explanation. I will see if the Unraid forums show anything that could be related.
BTW, I obviously can't be anywhere near sure that this is an issue with unRAID. It could be something else, like werift not being able to handle out-of-order packets during negotiation, as I saw quite a significant number of out-of-order packets from your camera streams. Not in any huge quantity, but depending on the ISP, it might be more prominent.
I don't know if you happen to have any other way to run ring-mqtt that's not on unRAID, but it would be interesting to try that if at all possible, just to help eliminate it. Running the Docker image on Windows or Mac, for example, would help to confirm or potentially eliminate unRAID as the source of the issues here.
Just an update from my side: I managed to scrounge an old laptop to install unRAID on and give it a test. Now, this is a pretty old machine (it has an i7-4700, so about 10 years old, which is about right, I think it was late 2013), but it had decent specs back in its day. It has a 1Gb onboard NIC and 16GB of RAM, plus it had been upgraded with a 256GB SSD, so there were plenty of resources for a quick test of running a few containers on unRAID.
Configured the system to boot the latest version of unRAID from the USB stick, mounted and formatted the old SATA SSD, installed mosquitto, ring-mqtt, and go2rtc just using host networking for a super simple test, pointed my development HA instance to use that go2rtc instead of the local one, and configured a few cameras to use the ring-mqtt there as well.
Well, it works, no errors, rock solid video, fast startup times, etc. I just don't know what is going on here.
Could you guys share more details about your networking setup: what firewall you have, what ISP, and what network card is in use? Is there anything at all unusual about the setup, for example an unusual MTU, a VPN in use, etc.? Anything you can think of that might be out of the ordinary.
Also, if there's anything in particular that you have installed on your unRAID, like a firewall plugin or something, that would be useful as well.
Very simple networking setup:
Nginx Proxy Manager (for SSL support from the outside) is the only firewall-ish thing, although it's really not. Ubiquiti Dream Machine (Base) for the main router. 2x Nano HD access points. The UDM does have Hairpin NAT enabled, not sure if that would cause an issue. (The bad cameras still won't play WebRTC when I'm trying to play using an internal IP.)
ISP is Comcast aka Xfinity ... just shy of 1 Gbps down, about 20 Mbps up.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c325f99765f8 lscr.io/linuxserver/jackett "/init" 20 hours ago Up 20 hours 0.0.0.0:9117->9117/tcp, :::9117->9117/tcp jackett
c7c9b2a80ba2 tsightler/ring-mqtt:latest "/init" 27 hours ago Up 27 hours ring-mqtt
a1e86fe6cf97 ghcr.io/deepch/rtsptoweb:latest "./rtsp-to-web --con…" 28 hours ago Up 27 hours rtsptoweb
5e1d8f5a80df lscr.io/linuxserver/sonarr "/init" 28 hours ago Up 28 hours 0.0.0.0:8989->8989/tcp, :::8989->8989/tcp sonarr
43789c50a670 rogerfar/rdtclient:latest "/init" 28 hours ago Up 28 hours (healthy) 0.0.0.0:6500->6500/tcp, :::6500->6500/tcp rdt-client
c196aefd49b8 lscr.io/linuxserver/radarr "/init" 28 hours ago Up 28 hours 0.0.0.0:7878->7878/tcp, :::7878->7878/tcp radarr
e417ad1b4613 sctx/overseerr "/sbin/tini -- yarn …" 28 hours ago Up 28 hours 0.0.0.0:5055->5055/tcp, :::5055->5055/tcp Overseerr
a275efb17374 lscr.io/linuxserver/bazarr "/init" 28 hours ago Up 28 hours 0.0.0.0:6767->6767/tcp, :::6767->6767/tcp bazarr
bce916102492 p3terx/ariang:latest "/darkhttpd /AriaNg …" 28 hours ago Up 28 hours 0.0.0.0:6880->6880/tcp, :::6880->6880/tcp AriaNg
e6d322b317d9 homeassistant/home-assistant "/init" 28 hours ago Up 28 hours Home-Assistant
ea9c52d6baa5 binhex/arch-code-server "/usr/bin/dumb-init …" 28 hours ago Up 28 hours 0.0.0.0:8500->8500/tcp, :::8500->8500/tcp binhex-code-server
58381e9feabf mariadb "docker-entrypoint.s…" 28 hours ago Up 28 hours MariaDB-Official
0b340b37fdce ghcr.io/haveagitgat/tdarr "/init" 2 weeks ago Up 28 hours 0.0.0.0:8264-8266->8264-8266/tcp, :::8264-8266->8264-8266/tcp, 8267/tcp tdarr
bbb354d0a293 koenkk/zigbee2mqtt:latest "docker-entrypoint.s…" 3 weeks ago Up 28 hours 0.0.0.0:9442->9442/tcp, :::9442->9442/tcp zigbee2mqtt
1b710196bf12 pihole/pihole:latest "/s6-init" 4 weeks ago Up 28 hours (healthy) pihole
1d4632b7f4ca p3terx/aria2-pro "/init" 5 weeks ago Up 28 hours 0.0.0.0:6800->6800/tcp, :::6800->6800/tcp, 0.0.0.0:6888->6888/tcp, :::6888->6888/tcp, 0.0.0.0:6888->6888/udp, :::6888->6888/udp aria2-pro
466690542e54 portainer/portainer-ce "/portainer" 5 weeks ago Up 28 hours 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 9443/tcp, 0.0.0.0:9996->9000/tcp, :::9996->9000/tcp Portainer-CE
005217ad8404 codeproject/ai-server:latest "./CodeProject.AI.Se…" 6 weeks ago Up 28 hours 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp, 32168/tcp CodeProject.AI_Server
459c7668faa1 ich777/krusader "/opt/scripts/start.…" 2 months ago Up 28 hours 0.0.0.0:5901->5900/tcp, :::5901->5900/tcp, 0.0.0.0:8081->8080/tcp, :::8081->8080/tcp Krusader
7ade22cbf091 jlesage/nginx-proxy-manager:latest "/init" 2 months ago Up 28 hours 0.0.0.0:443->4443/tcp, :::443->4443/tcp, 0.0.0.0:80->8080/tcp, :::80->8080/tcp, 0.0.0.0:7818->8181/tcp, :::7818->8181/tcp NginxProxyManager
966b444744f7 olprog/unraid-docker-webui "./unraid-docker-web…" 3 months ago Up 28 hours 1111/tcp, 0.0.0.0:1111->8080/tcp, :::1111->8080/tcp Docker-WebUI
453fdc472a48 cmccambridge/mosquitto-unraid:latest "/docker-entrypoint.…" 5 months ago Up 28 hours 0.0.0.0:1883->1883/tcp, :::1883->1883/tcp, 0.0.0.0:49154->8883/tcp, :::49154->8883/tcp, 0.0.0.0:49153->9001/tcp, :::49153->9001/tcp mosquitto
9323b790521e housewrecker/gaps "/bin/sh -c ./start.…" 8 months ago Up 28 hours 0.0.0.0:8484->8484/tcp, :::8484->8484/tcp gaps
c56a30adaeda p3rco/openrgb:latest "/init" 8 months ago Up 28 hours 0.0.0.0:5900->5900/tcp, :::5900->5900/tcp, 0.0.0.0:6742->6742/tcp, :::6742->6742/tcp, 0.0.0.0:5804->5800/tcp, :::5804->5800/tcp P3R-OpenRGB
0d650d54792d plexinc/pms-docker:plexpass "/init" 8 months ago Up 28 hours (healthy) Plex-Media-Server
Let me know what else I can pull ...
I am kind of confused... Is go2rtc required? I thought that it was built into the container itself. I am only running ring-mqtt and mosquitto. I will try to add go2rtc as well. Maybe I missed something in the documentation, but I assumed I could stream RTSP straight from the ring-mqtt container.
@guythnick No, the go2rtc addon/container isn't required unless you want to enable low-latency streaming to HA via WebRTC. If you just want to stream via RTSP (for example, to VLC or using the native HA stream, etc.) then it's not needed at all.
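As a quick sanity check, you can point ffplay or VLC straight at the container's RTSP endpoint, no go2rtc involved, something like this (the host IP and port here are just an example, substitute your own mapping):

ffplay rtsp://192.168.1.184:8554/90486c0ef167_live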
To be completely clear, your failure behavior is very different from that of @Scope666, but I was somewhat conflating them because you were both using unRAID and both cases seemed somewhat like UDP packet related issues from a symptom perspective. However, at this point I'm not very convinced unRAID is the issue. Technically, I guess I was never convinced, I was just out of ideas, as there are not many options left since, in both cases, the very same cameras all work fine for me.
Ok, I think I might have had a breakthrough. My Unraid box has an always-running Windows 11 VM for Blue Iris. I spun up go2rtc in there; WebRTC is still not working to my laptop for the 3 cameras... BUT... if I remote into the Windows 11 VM, all the cameras play in its browser via WebRTC without issue.
The setup looks like this:
Unraid box --- TP Link smart switch ----long ethernet run---- Router ethernet port.
I'm now starting to suspect the TP Link switch might be blocking the traffic. Tomorrow I might try to move the Unraid server to a different spot in the house, to see if the behavior changes.
OK, it gets weirder... the TP Link switch has my old Asus router plugged into it; I use it as an additional 5GHz AP for that side of the house. I temporarily changed its SSID so I could pin myself to it, and WebRTC works to my laptop! Now, that's still going through the TP Link switch, but it's not going back to the main router over that long ethernet run at that point.
I just removed go2rtc from the Win11 VM, put it back in Unraid Docker, streams are still playing if I connect to the Asus's Wi-Fi instead of the Ubiquiti.
So at this point it's either a bug in the router or APs, or I have a setting wrong somewhere. WebRTC is definitely good at the server, and out the TP Link to the Asus AP, but it's dying on the way back to the router, or in the router itself.
Since you run Ubiquiti too, if you have any ideas for settings to look at, please let me know! I feel like we're getting close now...
EDIT: As a test I took the long ethernet run out of the TP Link and put it straight into the Unraid box, all cameras work with WebRTC. I then had the idea to go uplink -- Asus router switch port -- Unraid box. THAT seems to be working, so I think the culprit is the TP Link switch!!!
EDIT2: So that was it, all this time. That stupid TP Link switch, the timing makes sense too from when I started having problems. Now why the 3 cameras, but not the 4th, and only with go2rtc and not RTSPtoWEB, we'll never know.
@Scope666 Interesting findings and good sleuthing! I wonder if it's just due to overrunning the queue in the switch. During the initial startup of the stream there's a period of time where Ring is sending data but werift isn't forwarding the packets yet. This data builds up in the UDP receive buffer and, once the full stream gets going, those initial packets are sent in a very fast burst through the pipeline, after which the stream settles down to a more typical rate. I wonder if the TP Link switch is just dropping a lot of those initial packets because its queue is too small (it must have some queue since it appears to have QoS). That could potentially explain why the single camera, which for some reason negotiates a slightly lower bitrate, somehow works; it could be just right on the edge.
Anyway, interesting, but at this point it seems not related to the issue @guythnick is having, so I'm going to copy your response with the resolution to your original issue and then hide the other comments about your issue from this thread as "resolved" to hopefully avoid confusion. I really appreciate your efforts and willingness to dig so deep into the issue and I'm glad you found a resolution for your specific case. It does remind us that networking gear can still be a source of unexpected issues!
Just checked, now that everything's working, the one Side camera still streams at 3.2 Main instead of 4.1 High like the rest, that's pretty odd. BTW, now that go2rtc is functional again, I can now use your "transcoded" event streams. They weren't working with RTSPtoWEB.
Well, transcoded events are pretty horrible. It's effectively the same as selecting to "download" a video in the Ring app. Ring transcodes the video to include timestamps from the event and their watermark, but the actual resulting file is pretty crappy (it's like they hard-stitch the pre-buffer footage together) and the video is not optimized for streaming at all, and thus streams very poorly in most cases, likely because it's optimized for local playback since it's a "download", and it does work fine for that use case.
Unfortunately, it's so bad that I have to use ffmpeg to re-transcode with the ultrafast preset (because that's the only one that has a prayer of working on lower end hardware like RPis) to try to get it back into something marginally streamable. But the tradeoff the ultrafast preset makes to get decent quality with fairly low CPU is to create a much larger file, which in turn produces a much higher bitrate than would normally be required for similar quality H.264 video.
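Conceptually, the cleanup pass is something like the following (an illustrative ffmpeg command, not the exact arguments ring-mqtt uses; the file names are placeholders):

ffmpeg -i downloaded_event.mp4 \
  -c:v libx264 -preset ultrafast \
  -c:a copy \
  -movflags +faststart \
  streamable_event.mp4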
I played around with a different approach, using Ring's API for "sharing" the file vs "downloading" the file. In this case the transcoded video seems far more optimized for streaming, which makes sense as that is the intended use case of that feature, and of course the file can still be downloaded in this case. In many ways this is even easier than the "download" method because the shared link is permanent while the download link expires every 180 seconds, but it also has the side effect that every single video ends up with a permanent link that can play it without any other authorization (basically, a magic link), and the video is then stored on Ring servers, apparently, forever. Well, I don't actually know how long, but longer than the normal 60 days or whatever, as I have a handful of videos of some deer roaming my front yard that I shared back in 2021 that are still there to this day. Ring might not like it if I did this for every video and, on top of that, having all user videos with a non-expiring link seems not exactly super-safe, so I stuck with the current approach for now.
I have considered the possibility of downloading the file locally, to /tmp or something, which should make it less of an issue since the local streamer (ffmpeg) would be able to analyze the entire file vs trying to stream it on-demand from AWS S3. Streaming the file from local storage seems to work just fine, but then you have to manage files, etc. Might happen one day.
So what you just said further supports your theory about the buffer not being able to hold / handle the burst. I bet RTSPtoWEB is at a lower bitrate (doesn't support audio) so that's probably why it worked. The one cam was probably just under the threshold to support live and non-transcoded events with go2rtc on the bad switch.
@guythnick Can you check to see if there's any indication of UDP packets being dropped at the OS level? You should be able to run something like "netstat -su" and get output like this:
IcmpMsg:
OutType3: 36
Udp:
28475 packets received
36 packets to unknown port received
0 packet receive errors
28475 packets sent
0 receive buffer errors
0 send buffer errors
IgnoredMulti: 114
UdpLite:
IpExt:
InBcastPkts: 114
InOctets: 708814872
OutOctets: 311029390
InBcastOctets: 9054
InNoECTPkts: 395383
Also, the output of "ip -s link" might be interesting if it shows any drops.
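If you do run "ip -s link", the RX "dropped" column is the one to watch. The output looks roughly like this (sample values only, and the exact column names vary slightly between iproute2 versions):

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT
    RX: bytes  packets  errors  dropped  missed  mcast
    708814872  395383   0       0        0       114
    TX: bytes  packets  errors  dropped  carrier collsns
    311029390  284301   0       0        0       0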
As this issue appears to have gone stale, I'm going to close it now. The debug logs appear to show clear issues receiving UDP data from Ring over WebRTC, but I'm afraid I have little idea why. However, since the same camera works fine with my setup, I have to assume it's not a problem in the code but an environment-specific issue. I wish you luck in tracking it down if you decide to dig further.
Yes, sorry, I kind of gave up. I did get the Docker container running on my main Windows machine, and it had the same behavior, so it is something related to my internal network. No worries though, I will possibly dig into it on my own later. Thanks for the help, I am impressed by how active you are with helping users.
Totally fair, let me know if you do pick it up again or have any other questions. I hate just leaving something not working, but there are limits to what I can control. The debug logs seem really clear that it's a network problem, and coupled with the fact that it worked for me, I have to assume that is the case, however, I have no real proof of this other than logs showing UDP packets not arriving.
One thing I have really started to wonder about: I personally see a lot of out-of-order packets from Ring media servers, which I believe is caused by how the routing works for AWS since they have so many routes. During normal streaming this isn't such an issue because werift uses a jitter buffer to deal with out-of-order packets, as they are somewhat expected, but perhaps, if this happens consistently during the handshake phase, it could cause this type of issue. It might be worth opening an issue on the werift project and posting your trace files there, but I didn't want to do that without your permission. Would you be OK if I did that?
Yeah that's totally fine.
Describe the problem
I am able to get ring-mqtt 5.4.1 running via Docker, but every time I try to run the RTSP stream, it fails in VLC. I am using the URL 'rtsp://192.168.1.184:8558/90486c0ef167_live'. It is also not working when adding a generic camera in HA, although the snapshot does work, as well as all of the other entities added to HA via MQTT. When the stream fails, it looks like in the logs that it attempts to open the stream several times.
Describe your environment
Running via Docker on Unraid. Using a Ring Video Doorbell (2nd Gen) wired. Video feed is working in the app.
Describe any steps you've taken to attempt to resolve the problem
Tried adding username / password, with the same result.
Config:
Debug Logs