Closed ravenblackx closed 9 months ago
After looking at the code a bit, I realized the browser may be a factor, so: Chrome. (It looks like connectionstatechange is how this is expected to be handled, but the event doesn't fire.)
After a bit more patience, it seems connectionstatechange does eventually fire (about 30 seconds after stop), but even then the camera is not deactivated when there are no readers.
Judging by the verbose logging, it should be possible to detect a stop without the 30s delay (these lines appear immediately when you click stop):
DEBUG:aiortc.rtcdtlstransport:RTCDtlsTransport(client) - DTLS shutdown by remote party
DEBUG:aiortc.rtcdtlstransport:RTCDtlsTransport(client) - State.CONNECTED -> State.CLOSED
DEBUG:aiortc.rtcrtpsender:RTCRtpSender(video) > RtpPacket(seq=11132, ts=2056298810, marker=0, payload=96, 1300 bytes)
DEBUG:aiortc.contrib.media:MediaRelay(2110969743248) Stop proxy 2110931352832
DEBUG:aiortc.rtcrtpsender:RTCRtpSender(video) - RTP finished
On the "stopping at all" front, I was hoping that adding video.stop() in the failed-handler would turn off the camera, but it seems not.
My modification:
# open media source
audio, video = create_local_tracks(
    args.play_from, decode=not args.play_without_decoding
)

@pc.on("connectionstatechange")
async def on_connectionstatechange():
    print("Connection state is %s" % pc.connectionState)
    if pc.connectionState == "failed":
        await pc.close()
        pcs.discard(pc)
        logging.debug("peer connection closed")
        if video:
            video.stop()
            logging.debug("video stream stopped")
        if audio:
            audio.stop()
            logging.debug("audio stream stopped")
This logs video stream stopped (after the 30s delay), but I expected RelayStreamTrack.stop to go into MediaRelay._stop and unregister the proxy, yet Stop proxy is not in the log.
(Create proxy ... for source ..., Start proxy ... and Starting worker thread are all in the log.)
Oh, haha, Stop proxy is in the log before the 30s delay; I even quoted it.
So it looks like the missing piece for shutting down is that nothing ever stops the PlayerStreamTrack, only the RelayStreamTrack. I'm guessing this is intentional, on the assumption that leaving the webcam on is fine, which is why it's not in the example.
A "lazy shutdown" fix (by which I mean, the webcam instance stays active for 30s and can be reused by another connection, but shuts down 30s after the last connection closes) is this:
diff --git a/webcam_original.py b/webcam.py
index c74334f..645069a 100644
--- a/webcam_original.py
+++ b/webcam.py
@@ -16,6 +16,7 @@ ROOT = os.path.dirname(__file__)
 relay = None
 webcam = None
+active_connections = 0

 def create_local_tracks(play_from, decode):
@@ -61,6 +63,7 @@ async def javascript(request):
 async def offer(request):
+    global active_connections
     params = await request.json()
     offer = RTCSessionDescription(sdp=params["sdp"], type=params["type"])
@@ -71,13 +74,19 @@ async def offer(request):
     async def on_connectionstatechange():
         print("Connection state is %s" % pc.connectionState)
         if pc.connectionState == "failed":
+            global active_connections
             await pc.close()
             pcs.discard(pc)
+            active_connections -= 1
+            if not active_connections:
+                global relay, webcam
+                webcam.video.stop()
+                relay = None
+                webcam = None

     # open media source
     audio, video = create_local_tracks(
         args.play_from, decode=not args.play_without_decoding
     )
+    active_connections += 1

     if audio:
         audio_sender = pc.addTrack(audio)
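The refcounting pattern in the patch above can be factored into a small helper: open the underlying resource on the first acquire and close it when the last holder releases it. A generic sketch (the class and function names are illustrative, not part of aiortc):

```python
class SharedResource:
    """Reference-counted wrapper around an expensive resource (e.g. a
    webcam): opened on first acquire, closed when the last holder
    releases it."""

    def __init__(self, open_fn, close_fn):
        self._open_fn = open_fn
        self._close_fn = close_fn
        self._handle = None
        self._count = 0

    def acquire(self):
        # First acquirer actually opens the resource.
        if self._count == 0:
            self._handle = self._open_fn()
        self._count += 1
        return self._handle

    def release(self):
        # Last releaser actually closes the resource.
        self._count -= 1
        if self._count == 0:
            self._close_fn(self._handle)
            self._handle = None
```

In the webcam example, `open_fn` would correspond to create_local_tracks and `close_fn` to stopping the player track and discarding the relay.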
This appears to be related to #424 and #359, where someone was asking about catching the immediate event; the eventual answer was that you can catch the connectionstatechange event, but the logs show that (on Windows at least) that event fires 30 seconds later than
DEBUG:aiortc.rtcdtlstransport:RTCDtlsTransport(client) - State.CONNECTED -> State.CLOSED
And maybe this comment is the answer.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
What you pointed out about the connection state change occurring about 30s later corresponds to the loss of "consent" to send data: we run a periodic check using STUN packets, and when the remote peer stops replying we know it has gone away. I'm not sure whether there is a faster way of being notified that the remote peer is gone.
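For reference, consent freshness (RFC 7675) works by requiring periodic STUN responses: consent is considered lost once no response has arrived within a timeout. A toy model of just the expiry logic, assuming a 30-second timeout to match the delay observed here (the class name and API are illustrative, not aiortc's):

```python
import time


class ConsentTracker:
    """Toy model of RFC 7675 consent freshness: consent is lost if no
    STUN response has been seen within `timeout` seconds. The 30s
    default is an assumption matching the delay seen in this issue."""

    def __init__(self, timeout=30.0, clock=time.monotonic):
        self._timeout = timeout
        self._clock = clock
        self._last_response = clock()

    def response_received(self):
        # Called whenever the remote peer answers a STUN consent check.
        self._last_response = self._clock()

    def consent_lost(self):
        # True once the remote has been silent longer than the timeout.
        return self._clock() - self._last_response > self._timeout
```

This is why the connectionstatechange-based detection is inherently slow: it only fires after the silence has lasted the full timeout.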
Could you check whether anything actually gets sent by the browser when the peer connection is "closed"? I'm thinking along the lines of a DTLS shutdown or something similar.
I had a closer look at this, and we do receive a DTLS shutdown from the browser when a peer connection is closed. However, I also re-read the specs, and RTCDtlsTransport changing to the closed state does not seem to contribute anything to the overall connection state:
https://w3c.github.io/webrtc-pc/#dom-rtcpeerconnectionstate
@fippo This makes no sense to me, any idea why this is?
hrm... paging @pthatcher, I lost my coaster with the explanation for the rules from 2015 ;-)
I came across this issue https://github.com/pion/webrtc/issues/1767 which points to something else: while the DTLS moving to "closed" doesn't directly influence the connection state, it causes the peer connection to initiate a shutdown. I wonder whether browsers do the same, and what the rules are: e.g. "if all DTLS connections are closed, kill the peer connection".
> I wonder whether browsers do the same, and what the rules are: e.g. "if all DTLS connections are closed, kill the peer connection".
Surprisingly not, the remote could still do a restart with a different fingerprint to re-initiate. Maybe. On a good day...
This is somewhat hard to observe, because the cases where you can trigger it, like doing pc2.close() on this sample, will also fail ICE. What it does show is that pc1.getSenders()[0].transport.state goes to "closed", so this is detectable but indeed does not result in a change to the state of the peerconnection 🤔
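Since the transport's move to "closed" is detectable even though the peer connection state doesn't change, cleanup could be hooked to that event directly instead of waiting for consent expiry. A toy sketch of the listener pattern (the Transport class here is a hypothetical stand-in, not aiortc's or the browser's API):

```python
class Transport:
    """Hypothetical stand-in for a DTLS transport that notifies
    listeners when its state changes. NOT the aiortc API."""

    def __init__(self):
        self.state = "connected"
        self._listeners = []

    def on_statechange(self, callback):
        self._listeners.append(callback)

    def set_state(self, state):
        self.state = state
        for callback in self._listeners:
            callback(state)


def watch_for_shutdown(transport, stop_tracks):
    # Stop local tracks as soon as the transport reports "closed",
    # without waiting ~30s for the consent-freshness timeout.
    def on_change(state):
        if state == "closed":
            stop_tracks()

    transport.on_statechange(on_change)
```

With aiortc the equivalent would be reacting to the sender's transport reaching the closed state, which the DEBUG logs quoted earlier show happening immediately on stop.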
This is on Windows.
Repro:
python webcam.py
(Also, if you refresh the browser and start the stream a couple of times it makes the python script increasingly unresponsive.)