prathibhacdac opened this issue 2 years ago
The delay in audio can be caused by slow routers, setting the audio buffer too long, or simply by there being too much latency over your internet connection.
It’s important to remember this is NOT peer-to-peer media. Media will flow through the Asterisk server for each call. If your Asterisk box is 500ms away from client A, and you call client B who is 500ms away from the server, your audio latency will now be 1 sec. This will be noticeable. (Asterisk can tell you what the endpoint latency is.)
Things get even worse if you plan on trunking out, as ISPs and cellphone networks have their own latency that you have to add to your own.
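If you want to check this latency from the browser side as well, the WebRTC stats API exposes the measured round-trip time to the server. A minimal sketch (not part of the Browser Phone itself; `pc` is assumed to be the call's active RTCPeerConnection):

```javascript
// Minimal sketch: log the round-trip time between the browser and the server.
// `pc` is assumed to be the active RTCPeerConnection for the call.
async function logRoundTripTime(pc) {
    const stats = await pc.getStats();
    stats.forEach((report) => {
        // The nominated candidate pair carries the live RTT measurement (in seconds)
        if (report.type === "candidate-pair" && report.nominated &&
            report.currentRoundTripTime !== undefined) {
            console.log("RTT to server:", report.currentRoundTripTime * 1000, "ms");
        }
    });
}
```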
> It’s important to remember this is NOT peer-to-peer media. Media will flow through the Asterisk server for each call. If your Asterisk box is 500ms away from client A, and you call client B who is 500ms away from the server, your audio latency will now be 1 sec. This will be noticeable. (Asterisk can tell you what the endpoint latency is.)
> Things get even worse if you plan on trunking out, as ISPs and cellphone networks have their own latency that you have to add to your own.
Is it not WebRTC based, like Google Meet, Zoom, etc.?
Yes, it is WebRTC, but Asterisk is a B2BUA (back-to-back user agent): https://en.wikipedia.org/wiki/Back-to-back_user_agent
Which components of this application are peer to peer?
The Browser Phone is an implementation of SIP signalling and SDP media negotiation. If you read the PeerConnection documentation you will soon see that what Chrome can do with a PeerConnection is create an SDP for media negotiation only, nothing else - this is done with ICE, and messages sent between two peers. In this case these two peers are you and Asterisk, even though your intention is to call another user. It's Asterisk that creates this link, and it is designed to stay in the middle of the call. The PeerConnection documentation is clear on this - signalling is up to you. In this case I have opted for SIP as the signalling protocol because it's an already-established protocol and you don't need to re-invent the wheel. If your peers were able to signal each other with another protocol (using some other signalling server), and the SDP could be transmitted directly between the two peers, then media would flow peer-to-peer.
I have not tried it, but apparently reSIProcate can do this... but then you are going down a completely different path. I'm also looking at something more standard like OpenSIPS.
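To make that division of labour concrete, here is a minimal sketch of the browser side in plain WebRTC (not the Browser Phone code itself; `sendViaYourSignalling` is a hypothetical function standing in for whatever signalling you choose - in this project it would be a SIP INVITE over a WebSocket to Asterisk):

```javascript
// Minimal sketch: a PeerConnection only produces and consumes SDP.
// Delivering that SDP to the far side is entirely up to your signalling layer.
async function startCall() {
    const pc = new RTCPeerConnection();
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    stream.getTracks().forEach((track) => pc.addTrack(track, stream));
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    // sendViaYourSignalling() is hypothetical - note that the "peer" answering
    // this offer is Asterisk, not the user you are actually calling.
    sendViaYourSignalling(pc.localDescription.sdp);
}
```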
Please share the architecture of the Browser Phone app.
> setting the audio buffer too long,
How to reduce the audio buffer?
> Please share the architecture of the Browser Phone app.
It's not about the architecture of the Browser Phone application - it can be used either way you want. It's by far easier to develop a solution based on Asterisk due to the abundance of support and working samples... however, this comes at a tradeoff, especially with WebRTC: since the media is encrypted, changing the media path is not possible.
> How to reduce the audio buffer?
If you have not set it, it's off, so it will not impact things. See https://wiki.asterisk.org/wiki/display/AST/Asterisk+16+Function_JITTERBUFFER
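For completeness, if you did want to experiment with it, the jitterbuffer is enabled per channel in the dialplan. A sketch based on the wiki page above (the extension pattern and values are illustrative only):

```
; extensions.conf sketch - enable an adaptive jitterbuffer on the calling channel
exten => _X.,1,Set(JITTERBUFFER(adaptive)=default)
 same => n,Dial(PJSIP/${EXTEN})
```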
> It’s important to remember this is NOT peer-to-peer media. Media will flow through the Asterisk server for each call. If your Asterisk box is 500ms away from client A, and you call client B who is 500ms away from the server, your audio latency will now be 1 sec. This will be noticeable. (Asterisk can tell you what the endpoint latency is.)
> Things get even worse if you plan on trunking out, as ISPs and cellphone networks have their own latency that you have to add to your own.
How does distance matter?
I am having the same issue: about a 1 sec delay. As soon as I click answer, the customer doesn't hear me at all for the first second, so whatever I say during that first second never reaches the customer. Maybe the delay happens somewhere when you click answer, I don't know. With a softphone I don't have any delay at all.
The only way to see this delay is with Wireshark on the PC, and `rtp set debug on` on the Asterisk box. This way you can literally see every packet, and most importantly where the packets are going and whether there is a delay.
Remember that WebRTC is different from softphones in that softphones typically don't use ICE & DTLS.
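For reference, the Asterisk side of that is done from the console (standard Asterisk CLI commands; the PJSIP logger is optional and only needed if you also want to see the SIP signalling):

```
*CLI> rtp set debug on
*CLI> pjsip set logger on
```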
> The Browser Phone is an implementation of SIP signalling and SDP media negotiation. If you read the PeerConnection documentation you will soon see that what Chrome can do with a PeerConnection is create an SDP for media negotiation only, nothing else - this is done with ICE, and messages sent between two peers. In this case these two peers are you and Asterisk, even though your intention is to call another user. It's Asterisk that creates this link, and it is designed to stay in the middle of the call. The PeerConnection documentation is clear on this - signalling is up to you. In this case I have opted for SIP as the signalling protocol because it's an already-established protocol and you don't need to re-invent the wheel. If your peers were able to signal each other with another protocol (using some other signalling server), and the SDP could be transmitted directly between the two peers, then media would flow peer-to-peer.
> I have not tried it, but apparently reSIProcate can do this... but then you are going down a completely different path. I'm also looking at something more standard like OpenSIPS.
Were you successful with OpenSIPS?
OpenSIPS works very well with the Browser Phone! The solution is very much a success. Signalling works very well: registration occurs between the Browser Phone and the server to hold a “location”, and call setup is then done directly between endpoints.
There is even an option to have the encrypted DTLS stream converted to a regular RTP stream using RTPengine - this allows calls to route from traditional (non-WebRTC) extensions to WebRTC ones. (Just remember that in this mode the media is sent via the transcoding server and may add latency.)
Any solution to send the media directly to the peer?
> Any solution to send the media directly to the peer?
To be clear, media will typically be sent peer-to-peer encrypted when using two browsers and OpenSIPS for signalling (and location).
Asterisk will always be in the media path because it’s not a proxy server, and I don’t think there are plans to change this, especially because of DTLS encryption. If your Asterisk server is on site, this probably will not be an issue, however, operating in the cloud will add latency. If this latency is too high, it can cause call quality issues.
Have you set up WebSocket transport using OpenSIPS? Can you share your opensips.cfg file?
Which version of OpenSIPS are you using?
Do you have documentation on how to set up WebRTC using OpenSIPS?
Do you use RTPengine with OpenSIPS? As per the OpenSIPS documentation, OpenSIPS handles the SIP signalling part, while media is handled by RTPengine, a high-performance media proxy that can handle both RTP and SRTP media streams, as well as bridge between them.
Doesn't this cause latency?
RTPengine is only necessary when you are going from DTLS (typical of a WebRTC call leg) to RTP (typical of a traditional SIP call leg). It essentially transcodes the call from WebRTC to regular SIP, but again, consider that the media now has to go “through” this RTPengine, which could add delay. How much depends on the latency of the links. It’s best to avoid this, but sometimes it’s not possible to do without it.
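As a rough illustration of where RTPengine sits (a sketch only, assuming OpenSIPS with the rtpengine module loaded and an rtpengine daemon running; the flags are taken from the rtpengine documentation and would need tuning for a real deployment):

```
# opensips.cfg sketch - rewrite a WebRTC (DTLS-SRTP) offer into plain RTP
# so that a traditional SIP endpoint can answer it (illustrative only)
if (is_method("INVITE")) {
    rtpengine_offer("RTP/AVP replace-origin replace-session-connection ICE=remove DTLS=off");
}
```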
I've configured OpenSIPS and RTPProxy and registered the users using the Browser Phone. There is no audio or video on either the internal network or the external network.
I would highly recommend that you use two regular SIP clients (Zoiper is a good example) to debug your traffic and media flow first, and once that's working correctly, try WebRTC.
There is a delay in audio reception and video display. How could this be improved?