jitsi / lib-jitsi-meet

A low-level JS video API that allows adding a completely custom video experience to web apps.
Apache License 2.0

[P2P] Is Jitsi using AdaptiveBweThreshold? #1960

Closed laurenzfg closed 2 years ago

laurenzfg commented 2 years ago


Howdy!

I hope this bug report fits in this repo, since it is somewhat of a feature request / product feedback. I am investigating the congestion-control traits of Jitsi; I have already filed #1871 with you. I am writing my final thesis at university on this subject, so your input would be greatly appreciated.

Description


Is Jitsi taking advantage of the adaptive threshold proposed in the research paper [1] and implemented in trendline_estimator.cc? This adaptive threshold is crucial when a Jitsi call competes against TCP (CUBIC / Reno) at a bottleneck router. This might be a typical usage scenario for your users: they want to have a call from their hotel room, but their neighbors are all streaming TCP-controlled video on demand, e.g. Netflix. The paper [2] showed that the original WebRTC + Google CC stack will starve in that situation; the users will be unable to call with great quality. With the adaptive threshold, the WebRTC flow should get a fair share of the available bandwidth, and thus the user might be able to call with sufficient bandwidth [1]. So there would be great value in employing the adaptive threshold!

But I am not entirely sure whether the adaptive threshold is currently in use in Jitsi (cf. current behaviour). Jitsi does not get as much bandwidth when competing with TCP CUBIC as expected. Moreover, Signal, which also uses webrtc.org, gets the same poor bandwidth as Jitsi does. But from [1], you would expect Jitsi to get a fair share of the bandwidth when competing against TCP CUBIC. Interestingly, Google implemented other values for k_u and k_d than what was proposed in [1]: the paper proposed (k_u, k_d) = (0.01, 0.00018), while webrtc.org uses (k_u, k_d) = (0.0087, 0.039) (trendline_estimator.cc).
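For readers unfamiliar with [1]: the adaptive threshold adjusts the over-use detection limit γ toward the magnitude of the measured delay-gradient m(t), rising quickly (gain k_u) when the gradient exceeds it and decaying slowly (gain k_d) otherwise. A minimal Python sketch of that update rule, using the paper's gains from above (this is an illustration of the mechanism, not the libwebrtc implementation, which adds clamping and rate limiting):

```python
def update_threshold(gamma, m, dt_ms, k_u=0.01, k_d=0.00018):
    """One adaptation step of the over-use threshold from [1]:
    gamma(t) = gamma(t - dt) + dt * k * (|m(t)| - gamma(t - dt)),
    with k = k_u when |m| exceeds the threshold, else k = k_d.
    gamma and m are in ms, dt_ms in ms. Gains are the paper's values."""
    k = k_u if abs(m) > gamma else k_d
    return gamma + dt_ms * k * (abs(m) - gamma)

# Example: sustained large delay gradients (e.g. a competing TCP flow
# filling the bottleneck buffer) drive the threshold upward, so the
# detector stops signalling over-use and the flow keeps its bandwidth.
gamma = 12.5  # initial threshold in ms, as in [1]
for _ in range(50):
    gamma = update_threshold(gamma, m=25.0, dt_ms=10)
print(round(gamma, 2))  # has risen toward |m| = 25 ms
```

With a static threshold, the same sustained gradient would instead trigger repeated over-use signals and rate decreases, which is the starvation behaviour measured below.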

Do you have any capacity to dive into this? I believe higher bandwidths when competing against TCP would improve the Quality of Experience of Jitsi's users.

[1]: Carlucci, Gaetano, et al. "Analysis and design of the google congestion control for web real-time communication (WebRTC)." Proceedings of the 7th International Conference on Multimedia Systems. 2016. https://mmsys2016.itec.aau.at/papers/MMSYS/a13-carlucci.pdf

[2]: L. De Cicco, G. Carlucci and S. Mascolo, "Understanding the Dynamic Behaviour of the Google Congestion Control for RTCWeb," 2013 20th International Packet Video Workshop, 2013, pp. 1-8, doi: 10.1109/PV.2013.6691458. https://c3lab.poliba.it/images/c/ce/Gcc-pv-2013.pdf

Current behavior


Jitsi starves when competing against a TCP CUBIC flow. The bottleneck has a large buffer of 2x the BDP; bandwidth: 2 Mbit/s, RTT: 50 ms. Jitsi is the green line, TCP the blue line.

(figure: Jitsi_rtt50ms_pfifo84_20220309-13h48m37s)
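For concreteness, a back-of-envelope calculation from the numbers stated above (2 Mbit/s bottleneck, 50 ms RTT, one competing TCP flow):

```python
# Bottleneck parameters as described in the experiment above.
link_bps = 2_000_000   # bottleneck bandwidth, bits/s
rtt_s = 0.050          # round-trip time, seconds

bdp_bits = link_bps * rtt_s    # bandwidth-delay product
buffer_bits = 2 * bdp_bits     # bottleneck buffer sized at 2x BDP
fair_share_bps = link_bps / 2  # fair share with one competing TCP flow

print(bdp_bits / 8)      # BDP in bytes: 12500.0
print(fair_share_bps)    # 1000000.0 bit/s, i.e. 1 Mbit/s
```

A 2x-BDP buffer means the TCP flow can build roughly 100 ms of standing queue before loss, which is exactly the regime where a delay-based controller with a static threshold backs off while TCP does not.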

Expected Behavior


Reviewing the research [1], Jitsi should be able to get a fair share, i.e. 1 Mbit/s.

Possible Solution


Figure out why the real-life implementation is significantly worse than the research implementation. Is the adaptive threshold not used at all? Is the implementation broken or wrongly parameterized? Is the adaptive threshold just not as good as the research proposed?

Steps to reproduce


Set up a P2P call between two Android Jitsi clients using an environment like the one sketched below.

Environment details


Lab setup: two Android emulator phones are in the same IPv4-only LAN behind a NAT. A concurrent TCP flow is sent from dedicated computers. The flows run through a token bucket filter that limits the bandwidth to 2 Mbit/s. See picture: net

damencho commented 2 years ago

Just some notes:

laurenzfg commented 2 years ago

Thanks @damencho, I am aware of that. My measurements are all P2P and hence use the Google Congestion Control rate controller. I was unsure whether to raise this issue with you or with the webrtc.org tracker. Since I measured that your product's performance diverges from the WebRTC research, I chose to raise it with you. I posted here because https://github.com/jitsi/webrtc is maintained by just one engineer (@saghul) and has no issue tracker enabled.

I do believe that the problem is in WebRTC code, as I noticed the very same issue when measuring Signal P2P calls. I raised the issue with you since I figured you might be the experts for the WebRTC code base and interested in improving the product performance.

damencho commented 2 years ago

The jitsi/webrtc code is a fork of webrtc and is used for mobile builds. In my opinion, any fixes should be done in that code, though upstream rather than in the fork.

fippo commented 2 years ago

@laurenzfg the discuss-webrtc mailing list is the right place, but it is hard to get definitive answers there (and beware, moderation for first-time posters sometimes takes more than a week); the tracker isn't the right place.

In general, the BWE is one of the more opaque areas of libwebrtc, and I haven't seen much public discussion even on the tuning that happened over the years.

(make sure to look into some of the work done by @mengelbart too, even though not libwebrtc related)

laurenzfg commented 2 years ago

Thx for the pointer @fippo , I'll turn to the mailing list :)

fippo commented 2 years ago

did you post? I may have someone interested in replying :-)

laurenzfg commented 2 years ago

I actually had to backlog this, since I have to finalize my thesis and no longer have time to email back and forth via a moderated mailing list.

laurenzfg commented 2 years ago

https://groups.google.com/g/discuss-webrtc/c/CuTQ3Lah4OU