w3c / web-networks

Web & Networks Interest Group
https://www.w3.org/web-networks/

Network : Application usage of network condition predictions #16

Open sudeepdi opened 4 years ago

sudeepdi commented 4 years ago

Application Domain: Media Streaming, Online Gaming, etc

Description

Link Performance Prediction (LPP) - Bring network awareness to the application

Example

Media streaming with LPP: network condition predictions are used by the media application to take action in advance, ensuring the user has a smoother QoE when passing through areas where network quality is poor.

Challenges

Different behaviors / strategies to apply vs. desired behavior
- Stream-specific parameters: min/max buffer, quality, etc.
- What behaviors should be active
- How aggressive each behavior should be

Streaming strategy config
- Manifest file as a way to configure different behavior profiles
- Also consider in-band and out-of-band events to feed predictions
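One way to picture the "behavior profiles" idea is a small config object the player could receive (e.g., via the manifest or out-of-band), plus a rule that picks how aggressive to be based on the prediction. All names and thresholds below are illustrative assumptions, not part of any published LPP format:

```javascript
// Hypothetical behavior profiles, e.g. delivered via the manifest or an
// out-of-band config channel. Names and thresholds are illustrative only.
const behaviorProfiles = {
  conservative: { minBufferSec: 20, maxBufferSec: 60, maxQualityBps: 2e6 },
  normal:       { minBufferSec: 8,  maxBufferSec: 30, maxQualityBps: 8e6 },
};

// Decide how aggressively to pre-buffer from a predicted throughput drop.
function selectProfile(predictedBps, currentBps) {
  // If the prediction says throughput will fall well below what we see now,
  // switch to the conservative profile (bigger buffer, capped quality).
  return predictedBps < 0.5 * currentBps
    ? behaviorProfiles.conservative
    : behaviorProfiles.normal;
}
```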

How applications perceive the network

Current Prototype of concept

MPEG/DASH and LPP – what has been done

Modified dash.js reference library/player, version 3.0.0:
- https://github.com/Dash-Industry-Forum/dash.js
- http://reference.dashif.org/dash.js/v3.0.0/samples/dash-if-referenceplayer/index.html

Tests have been done both on mods in the reference library and directly in the player; both work, with some respective pros/cons.

The LPP changes tune the bufferTarget in src/streaming/rules/scheduling/BufferLevelRule.js
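The gist of tuning a buffer target from a prediction can be sketched as below. This is not the actual dash.js BufferLevelRule code, just a minimal illustration of the idea under assumed names and constants:

```javascript
// Sketch only: scale the buffer target up when a throughput drop is
// predicted, so the player buffers ahead before entering poor coverage.
// Constants and names are illustrative, not from dash.js.
const DEFAULT_BUFFER_TARGET_SEC = 12;
const MAX_BUFFER_TARGET_SEC = 60;

function lppBufferTarget(predictedBps, currentBps) {
  if (!predictedBps || predictedBps >= currentBps) {
    return DEFAULT_BUFFER_TARGET_SEC; // no degradation predicted
  }
  // Grow the target in proportion to the predicted drop, capped at a max.
  const scale = currentBps / predictedBps;
  return Math.min(DEFAULT_BUFFER_TARGET_SEC * scale, MAX_BUFFER_TARGET_SEC);
}
```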

If any member is interested in engaging to discuss potential solutions and evaluate proposals, feel free to use this thread. Also, please tag other folks who might be interested.

chrisn commented 4 years ago

I have a few questions about LPP that we didn't have time to cover in the joint meeting between the Web & Networks IG and Media & Entertainment IG (minutes here).

  1. The presentation covered developer tooling, but is there also still interest in providing browser APIs to make the prediction information available to web apps?

  2. Am I right in thinking that there are two possible deployment approaches for LPP - one that's integrated into the operator's network, the other that is run as a service on top of the network? If this is the case, how do these compare in terms of the input data to the predictions and user privacy?

  3. How does LPP operate where there are potentially multiple network operators between the end user and the application (e.g., streaming) service?

jsvennebring commented 4 years ago

Hi Chris, sorry for the late reply. Good questions!

  1. The work on APIs that expose predictions to applications is still moving forward; that is the main goal. The dev tool we presented here is merely a spin-off from some internal tools we thought could be of wider interest.

  2. The best way to run LPP is in an operator network: that gives the best predictions, since much more data is available to the ML algorithms, and it also handles the privacy aspects nicely, since the operator is already trusted with that data. However, we are starting to think it might be good to have a "backup" solution for cases where the operator does not offer this functionality, to provide at least some rudimentary information and ensure the APIs return something of value. That backup, which we currently refer to as a "Global LPP" service, would be less exact (since it has less data to work on) and has some privacy implications, as it needs some basic input to make the predictions, e.g., GPS data or network ID. The benefit, however, is that it also works on any network, such as Wi-Fi.

  3. It works, but with less precision. It currently works OK unless you are pulling data from far away, but we anticipate this issue will grow in the coming years as the air interface and nearby network increase dramatically in throughput. On the other hand, the trend is also toward more local datacenters and CDNs, so it might not be a big issue. There are ways to solve it, however, and if need be it is something we can go after.

acbegen commented 3 years ago

Pretty much all streaming clients (running over HTTP) work more or less as follows:

1. The client requests an object (e.g., a media segment)
2. Computes the download speed of that object (measurement stage)
    - 2a. Computes a smoothed value for the download speed (smoothing stage)
    - 2b. Uses the past values and a model to predict what the download speed will be during the next download (prediction stage)
3. Decides on which object to request next (and when), goes back to (1)

Step 2a is optional, and most implementations use a moving average. Step 2b is optional, and most implementations use the value from 2a directly (simple prediction), though there are some pretty good learning models for more accurate prediction.
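The measurement and smoothing stages above can be sketched as follows, using an exponentially weighted moving average as one common smoothing choice (class and parameter names are illustrative):

```javascript
// Minimal sketch of steps 2-2b: measure per-segment throughput, smooth it
// with an exponentially weighted moving average (one common choice), and
// use the smoothed value directly as the "simple prediction" for step 2b.
class ThroughputEstimator {
  constructor(alpha = 0.3) {
    this.alpha = alpha;   // smoothing factor in (0, 1]
    this.smoothed = null; // smoothed throughput, bits/s
  }

  // Step 2: compute the download speed of one object.
  addSample(bytes, durationSec) {
    const bps = (8 * bytes) / durationSec;
    // Step 2a: smoothing.
    this.smoothed = this.smoothed === null
      ? bps
      : this.alpha * bps + (1 - this.alpha) * this.smoothed;
    return this.smoothed;
  }

  // Step 2b (simple variant): predicted next speed = smoothed value.
  predict() {
    return this.smoothed;
  }
}
```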

I think LPP can provide input to step 2b, although one can also use the value suggested by LPP as the output of 2b. LPP should never mess with step 3.
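One way LPP could feed step 2b is to blend the network-side prediction with the client's own estimate, falling back to the local estimate when no LPP value is available. The blending weight and fallback behavior are assumptions for illustration, not part of any published LPP API; step 3 stays entirely with the client:

```javascript
// Sketch of plugging an LPP value into step 2b. The weight and fallback
// are assumptions, not part of any published LPP API.
function predictNextThroughput(localEstimateBps, lppPredictionBps, lppWeight = 0.7) {
  if (lppPredictionBps == null) {
    return localEstimateBps; // no network-side prediction available
  }
  // Blend the network-side prediction with the client's own estimate;
  // the decision of what to request next (step 3) remains the client's.
  return lppWeight * lppPredictionBps + (1 - lppWeight) * localEstimateBps;
}
```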