Closed: shiqifeng2000 closed this issue 2 years ago
In the examples, e.g., https://github.com/webrtc-rs/examples/blob/main/examples/play-from-disk-h264/play-from-disk-h264.rs, you can send a done signal to explicitly close the peerConnection when the peer connection state becomes failed.
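Concretely, in that example the state-change callback only signals a channel, and the close happens afterwards, outside the callback. Roughly (a sketch following the example's shape; done_tx/done_rx are a tokio mpsc channel as used there):

```rust
// Signal on failure from inside the callback; the actual close runs later,
// outside the callback, once the done signal (or Ctrl-C) is received.
let done_tx1 = done_tx.clone();
peer_connection
    .on_peer_connection_state_change(Box::new(move |s: RTCPeerConnectionState| {
        println!("Peer Connection State has changed: {}", s);
        if s == RTCPeerConnectionState::Failed {
            let _ = done_tx1.try_send(());
        }
        Box::pin(async {})
    }))
    .await;
```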
Thanks for the answer, but as your example shows, sending the done signal leads to the peer connection being closed:

```rust
tokio::select! {
    _ = done_rx.recv() => {
        println!("received done signal!");
    }
    _ = tokio::signal::ctrl_c() => {
        println!("");
    }
};

peer_connection.close().await?;
```
And that will not fix the problem.
I guess the current problem may be that a candidate cannot be released.
Here's the log supporting my assumption. When peer.on_ice_candidate is invoked there are 3 local candidates, but after the client drops and the connection is closed, 2 of them remain, which may be why the number of handles/fds grows every time a peer is created.
/* Log start */
[Rolling] 2022-03-30T15:16:25.524681+08:00 - INFO - vccplayer::player - Ice candidate Some(
RTCIceCandidate {
stats_id: "candidate:wE+2oBNHmWpxL+FPqrCU6GLhzQ1cQu5f",
foundation: "167090039",
priority: 2130706431,
address: "::",
protocol: Udp,
port: 49349,
typ: Host,
component: 1,
related_address: "",
related_port: 0,
tcp_type: "unspecified",
},
)
[Rolling] 2022-03-30T15:16:25.525033+08:00 - WARN - webrtc_ice::agent::agent_gather - [controlled]: failed to resolve stun host: 0.0.0.0:3478: io error: No available ipv6 IP address found!
[Rolling] 2022-03-30T15:16:25.525239+08:00 - INFO - vccplayer::player - Ice candidate Some(
RTCIceCandidate {
stats_id: "candidate:6iJ6dcYo2KzDi3kQaxiZyhFM7mpCdlBp",
foundation: "1528361898",
priority: 2130706431,
address: "192.168.0.100",
protocol: Udp,
port: 55336,
typ: Host,
component: 1,
related_address: "",
related_port: 0,
tcp_type: "unspecified",
},
)
[Rolling] 2022-03-30T15:16:25.526260+08:00 - WARN - webrtc_ice::agent::agent_internal - [controlled]: pingAllCandidates called with no candidate pairs. Connection is not possible yet.
[Rolling] 2022-03-30T15:16:25.526421+08:00 - INFO - vccplayer::player - Ice candidate Some(
RTCIceCandidate {
stats_id: "candidate:a1UMWevHL/eujAQZ+c0Qsp6gR2a2SJ3s",
foundation: "798456314",
priority: 1694498815,
address: "127.0.0.1",
protocol: Udp,
port: 61973,
typ: Srflx,
component: 1,
related_address: "0.0.0.0",
related_port: 61973,
tcp_type: "unspecified",
},
)
....... And here's the output of lsof -n | grep xxxx:
vccplayer 26789 robin 51u IPv4 0xdf99672c5f26eab1 0t0 UDP 192.168.0.100:55336
vccplayer 26789 robin 52u IPv4 0xdf99672c5f26f3e1 0t0 UDP :61973
/* Log end */
Somehow the UDP socket fails to close even when I manually trigger the RTCPeerConnection::close function. The media engine and setting engine are all in their default configuration, and I get the same result even if I remove all the tracks, data channels, etc. A UDPMux socket bound to a fixed network interface address seems promising, but it still fails to nominate the correct IP address from time to time. I am even considering building a UDPMux pool and dropping the connection from within my own code.
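For reference, this is roughly what I mean by a UDPMux bound to a fixed interface address. Only a sketch: the module paths and constructors here (UDPMuxDefault, UDPMuxParams, UDPNetwork::Muxed, SettingEngine::set_udp_network) are my reading of the ICE crate and may differ between webrtc-rs versions.

```rust
use anyhow::Result;
use tokio::net::UdpSocket;
use webrtc::api::setting_engine::SettingEngine;
use webrtc::ice::udp_mux::{UDPMuxDefault, UDPMuxParams};
use webrtc::ice::udp_network::UDPNetwork;

async fn setting_engine_with_udp_mux() -> Result<SettingEngine> {
    // Bind a single socket on the interface candidates should be nominated
    // from; every peer connection built from this SettingEngine shares it.
    let socket = UdpSocket::bind("192.168.0.100:8443").await?;
    let udp_mux = UDPMuxDefault::new(UDPMuxParams::new(socket));

    let mut setting_engine = SettingEngine::default();
    setting_engine.set_udp_network(UDPNetwork::Muxed(udp_mux));
    Ok(setting_engine)
}
```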
Today I noticed this comment in RTCPeerConnection in the webrtc-rs repo:

```rust
// Try closing everything and collect the errors
// Shutdown strategy:
// 1. All Conn close by closing their underlying Conn.
// 2. A Mux stops this chain. It won't close the underlying
//    Conn if one of the endpoints is closed down. To
//    continue the chain the Mux has to be closed.
```
It's highly possible I was stuck at the second step, but I failed to locate the muxer.
A little hint on how to close the connection or the muxer would be appreciated.
I tried a customized EphemeralUDP with min port 8443 and max port 8447. Once I had created more than 4 peers, the local candidates would only nominate a candidate with address "::" and port 8443, and the client could no longer pair the address, complaining: "webrtc_ice::agent::agent_internal - [controlled]: pingAllCandidates called with no candidate pairs. Connection is not possible yet."
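For context, the port range was configured roughly like this (a sketch: EphemeralUDP::new(min, max) and SettingEngine::set_udp_network are my reading of the API, and the exact paths may vary by version). With only five ports and agents that never release theirs, the range is exhausted after a handful of peers, which matches the behaviour above.

```rust
use anyhow::Result;
use webrtc::api::setting_engine::SettingEngine;
use webrtc::ice::udp_network::{EphemeralUDP, UDPNetwork};

fn setting_engine_with_port_range() -> Result<SettingEngine> {
    let mut setting_engine = SettingEngine::default();
    // Each live (or leaked) ICE agent holds one port from this range, so
    // 8443..=8447 only covers five concurrently open peer connections.
    setting_engine.set_udp_network(UDPNetwork::Ephemeral(EphemeralUDP::new(8443, 8447)?));
    Ok(setting_engine)
}
```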
I found the reason and will close this issue now. The problem is that I closed the peer connection inside a PeerConnection::on_peer_connection_state_change handler. When the PC is closed from there, it interrupts the chained clean-up process and leaves the AgentInternal stuck in the loop in AgentInternal::start_on_connection_state_change_routine. That's why the UdpNetwork, whether UdpMux or EphemeralUDP, was left behind in memory, which prevented the UDP socket from being released.
It's mostly a misuse issue, but maybe the author should update the code to prevent similar problems in the future.
Could you provide more details on how to reproduce it, and what should be improved to prevent such misuse?
This is the code that caused the problem:
```rust
let pc = peer_connection.clone();
peer_connection
    .on_peer_connection_state_change(Box::new(move |s: RTCPeerConnectionState| {
        info!("Peer Connection State has changed: {}", s);
        let is_failed = s == RTCPeerConnectionState::Failed;
        let pc1 = pc.clone();
        Box::pin(async move {
            if is_failed {
                pc1.close().await.ok();
            }
        })
    }))
    .await;
```
As for improving it, I am not sure; maybe an RwLock, or maybe an mpsc channel to control the closing chain? Hope this helps.
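For example, one way to keep the close out of the callback, using an mpsc channel as suggested (only a sketch; close_tx/close_rx and the spawned task are illustrative, not part of the library's API):

```rust
// Signal from the state-change callback, and close from a task that owns a
// clone of the peer connection, so the agent's clean-up chain is not
// re-entered from inside the callback.
let (close_tx, mut close_rx) = tokio::sync::mpsc::channel::<()>(1);

peer_connection
    .on_peer_connection_state_change(Box::new(move |s: RTCPeerConnectionState| {
        info!("Peer Connection State has changed: {}", s);
        if s == RTCPeerConnectionState::Failed {
            // Only notify here; do not call close() inside the callback.
            let _ = close_tx.try_send(());
        }
        Box::pin(async move {})
    }))
    .await;

let pc = peer_connection.clone();
tokio::spawn(async move {
    if close_rx.recv().await.is_some() {
        if let Err(err) = pc.close().await {
            info!("failed to close peer connection: {}", err);
        }
    }
});
```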
@rainliu
Rain, looking at the snippet given above (copied here below).
Is this considered a valid way to close the PC?
(As opposed to the example you linked to: https://github.com/webrtc-rs/examples/blob/main/examples/play-from-disk-h264/play-from-disk-h264.rs )
PS, thank you for creating webrtc.rs
```rust
// Valid? Not valid?
let pc = peer_connection.clone();
peer_connection
    .on_peer_connection_state_change(Box::new(move |s: RTCPeerConnectionState| {
        info!("Peer Connection State has changed: {}", s);
        let is_failed = s == RTCPeerConnectionState::Failed;
        let pc1 = pc.clone();
        Box::pin(async move {
            if is_failed {
                pc1.close().await.ok();
            }
        })
    }))
    .await;
```
Hi,
I am using this repo for a streaming use case. The streaming client is a web browser using the WebRTC API. It was working perfectly except for a memory issue. When I dug further I noticed something: the host opens a new UDP port whenever a new peer connection is created, but that port fails to get closed when the client drops.
/* Log start */
[Rolling] 2022-03-28T16:57:37.824594+08:00 - WARN - webrtc_ice::agent::agent_internal - [controlled]: Failed to close candidate udp4 prflx 192.168.0.100:63937 related :0: the agent is closed
[Rolling] 2022-03-28T16:57:37.824713+08:00 - INFO - webrtc_ice::agent::agent_internal - [controlled]: Setting new connection state: Failed
[Rolling] 2022-03-28T16:57:37.825166+08:00 - INFO - webrtc::peer_connection - ICE connection state changed: failed
[Rolling] 2022-03-28T16:57:37.825233+08:00 - INFO - vccplayer::player - Connection State has changed failed
[Rolling] 2022-03-28T16:57:37.825279+08:00 - INFO - webrtc::peer_connection - peer connection state changed: failed
[Rolling] 2022-03-28T16:57:37.825323+08:00 - INFO - vccplayer::player - Peer Connection State has changed: failed
/* Log end */
I googled the Go version (Pion), found something like https://github.com/pion/webrtc/issues/629, and tried closing the peer, but had no luck. Since I am not running in a cluster mode, single-port mode is not possible.
So what can I do to get rid of it? Maybe add a timeout feature to the UDP connections?
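For reference, the ICE disconnect/failure timing can be tuned through the SettingEngine; a sketch, assuming set_ice_timeouts mirrors Pion's SetICETimeouts (the exact signature may differ by version):

```rust
use std::time::Duration;
use webrtc::api::setting_engine::SettingEngine;

fn setting_engine_with_ice_timeouts() -> SettingEngine {
    let mut setting_engine = SettingEngine::default();
    // Mark the connection Disconnected after 5s without traffic, Failed after
    // 25s, and send keep-alives every 2s (the values are only illustrative).
    setting_engine.set_ice_timeouts(
        Some(Duration::from_secs(5)),  // disconnected timeout
        Some(Duration::from_secs(25)), // failed timeout
        Some(Duration::from_secs(2)),  // keep-alive interval
    );
    setting_engine
}
```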