Open yoavweiss opened 4 years ago
I'm looking at the old use cases, and the recurring theme there was really about detecting "wifi" vs. "cellular" (which we know translate to "cheap" and "expensive" - or at least, the two connection types that ~99% of users will ever need to deal with).
It might be helpful to have some examples where the buckets would help in the applications that are listed in: http://w3c-webmob.github.io/netinfo-usecases/
I wonder if a lot has changed since I wrote that document 7 years ago... I honestly don't know. But I think it would be helpful to have concrete cases we are trying to solve for now. I have a feeling not too much has changed.
You always have the case of a mobile hotspot, which appears as wifi but is usually a metered mobile connection; in that case it can't be detected automatically.
So, the goal is not to determine any of it automatically - just leave it to the user to indicate it via the OS.
It might be helpful to have some examples where the buckets would help in the applications that are listed in: http://w3c-webmob.github.io/netinfo-usecases/
Sure.
In 4.2, Gmail could've used the effective connection type to redirect people to the basic experience without asking them to click to opt out of the slow experience.
In 5.6, the "slow connection" warning could have benefited from a signal related to connection speed (even if the warning itself doesn't seem all that useful)
5.9 & 6.6 are interesting ones, as auto-playing videos can have a negative impact on both cost and the rest of the app (if they saturate a slow network). So you could imagine FB implementing autoplay that's conditional on both metered and ECT.
5.11 could have benefitted from using SMS when the network is extremely slow
4.1, 5.1-5.5, 5.7-5.8, 5.10, 5.12, 6.1-6.5, 6.7-6.17, 7.1
4.3 seems to indicate that a "cellular" vs. "wifi" signal may also be useful for non-cost reasons.
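The conditional-autoplay case above (5.9 & 6.6) could be sketched roughly as follows. Note this is purely illustrative: `metered` is the proposed indicator from #84, not a shipped API, and the thresholds are assumptions.

```javascript
// Hypothetical sketch: gate video autoplay on both a metered signal and the
// effective connection type (ECT). Missing signals should be treated
// conservatively by the caller.
function shouldAutoplay({ metered, effectiveType }) {
  // Never autoplay on a metered connection: it costs the user money.
  if (metered) return false;
  // Avoid saturating very slow networks.
  if (effectiveType === 'slow-2g' || effectiveType === '2g') return false;
  return true;
}

// In a page, the inputs might come from navigator.connection, if present
// (the `metered` property is hypothetical here):
// const conn = navigator.connection;
// shouldAutoplay({ metered: conn?.metered ?? true, effectiveType: conn?.effectiveType ?? '4g' });
```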
So, also revisiting the use-case apps, the "slow", "medium" and "fast" buckets I suggested are admittedly unhelpful.
The Gmail case above seems nice, but risks taking us down the path of trying to solve a somewhat specific problem. Maybe better as a "V2" thing?
What it appears most apps want is just the buckets: "cellular" (all the *Gs), "wifi", and maybe "wired" on desktop - but that case doesn't come up in any app, so "wifi" could include wired connections.
Most apps are using "cellular"/"Wifi" as a proxy to "expensive"/"cheap". I don't think that's a good proxy.
A "metered" indicator (as suggested in #84) can hopefully provide such a signal directly, assuming browsers can extract one from OS-based heuristics, or by directly asking the user.
But I don't think that the cases outlined by GMail and Facebook are overly specific - apps can have high-bandwidth and low-bandwidth "versions" of their entire app or parts of it, and differentially serve them based on the user's conditions.
At Wix we collect (anonymous) performance related information about visitors to all sites built on Wix, including effectiveType. Our purpose with this is similar to additional information we collect, such as device type, browser type, and geo location: to better understand and serve the needs of our users, and their users.
Based on the information we're collecting, we are considering adding automatic graceful degradation of sites UX based on quality of connectivity, estimated device capabilities (based on OS, memory and cores, for example), and on reduced data usage option being enabled in the browser.
The automatic degradations we're considering include:
Obviously such a feature would require realtime analysis of network and device capabilities.
To add to @DanShappir - at Algolia we have started to actively look at how we can surface recommendations to users on improving how they interact with our API. Search can be of critical importance and a big factor in determining the quality of user experience (oftentimes, search UIs can take up full landing pages - especially in e-commerce or discovery use cases), so we want to surface areas that can be improved.
A simple example of this is fetching fewer results or, in the case of large transfer sizes, adapting the display of search experiences by fetching only the critical attributes of a record.
We have been running our own experiments on some of our domains where we collected anonymous telemetry data combined with latency information and have managed to both decrease and stabilize latency metrics based on heuristics like effective connection type.
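The "fetch less on slow connections" idea above might look something like this sketch. The parameter names mirror common search-API options, but the thresholds and the attribute list are illustrative assumptions, not anything Algolia has shipped.

```javascript
// Sketch only: shrink a search request when the effective connection type
// indicates a slow network. Thresholds and attribute names are hypothetical.
function adaptSearchParams(effectiveType, baseParams) {
  const slow =
    effectiveType === 'slow-2g' ||
    effectiveType === '2g' ||
    effectiveType === '3g';
  if (!slow) return baseParams;
  return {
    ...baseParams,
    // Fetch fewer results per page on slow networks.
    hitsPerPage: Math.min(baseParams.hitsPerPage ?? 20, 5),
    // Only fetch the critical attributes of each record (hypothetical list).
    attributesToRetrieve: ['title', 'price'],
  };
}
```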
For the Akamai mPulse RUM product, we capture effectiveConnectionType in our analytics beacon, but do not report on it in our dashboards to our customers by default. However, customers can choose to add custom dimensions for their data, and some are reporting on this.
Some of the feedback that we've heard from our RUM customers looking at this data is that the values of 2g, 3g, 4g, etc. are confusing, especially when they know certain segments of their population aren't really on "cellular" networks - i.e. why is a desktop machine connected to a fiber connection reported as 4g?
We're considering whether to expose this metric by default in our UI, because we do think there is some value in reporting on visitors segmented by their "speed", but we were considering different buckets based on the effectiveConnectionType values. Something like:

Very Slow (2g) = slow-2g and 2g
Slow (3g) = 3g
Fast (cable/fiber/4g/5g) = 4g (and 5g, if it ever comes around)

Some of the above classifications could also take both ConnectionType and EffectiveConnectionType into account.
Those categories map closely to the slow/medium/fast buckets proposed earlier in this thread, but it's more of a higher-level metric that would use CT/ECT.
Note that our use case is reporting on and segmenting bulk performance data gathered from visitors for analysis on behalf of our customers, not acting on the information in-page like most of the use cases in that document.
So that being said, ECT is a decent signal of "speed", especially combined with CT. Though I'm not sure if also defining 5g with updated RTT and throughput values would be that useful beyond 4g. 🤷
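The bucketing described above could be sketched as a simple mapping from raw effectiveConnectionType values to higher-level reporting labels. The label names follow the comment; the 5g branch is speculative, since no such ECT value is currently defined.

```javascript
// Illustrative sketch: map effectiveConnectionType values into the
// higher-level "speed" labels proposed above.
function ectToBucket(ect) {
  switch (ect) {
    case 'slow-2g':
    case '2g':
      return 'Very Slow (2g)';
    case '3g':
      return 'Slow (3g)';
    case '4g':
    case '5g': // speculative: no 5g ECT value exists today
      return 'Fast (cable/fiber/4g/5g)';
    default:
      return 'Unknown';
  }
}
```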
Reading through the comments on this thread again, I am inclined to think we should make effectiveConnectionType report in speed buckets, as proposed above by @yoavweiss.
We could maybe model the buckets along Nielsen's Law of Internet Bandwidth, according to which users' bandwidth grows by 50% per year. Similar to Device Memory, it might make sense to introduce upper and lower bounds.
Doing so would also address the confusion outlined by @nicjansma.
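Bounded buckets in the style of Device Memory could look like the following sketch. The bounds, the unit (Mbps), and the power-of-two rounding scheme are all illustrative assumptions, not part of any concrete proposal here.

```javascript
// Sketch: clamp a measured bandwidth to [lower, upper] bounds, then round
// down to a power of two - mirroring Device Memory's coarse bucket style.
// Bounds and units are hypothetical.
function boundedBucket(mbps, lower = 0.1, upper = 100) {
  const clamped = Math.min(Math.max(mbps, lower), upper);
  return Math.pow(2, Math.floor(Math.log2(clamped)));
}
```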
Hey folks! 👋
I'm entirely unfamiliar with the standards process and as such I don't know how well I can contribute to the discussion. My wording will likely be imprecise, but I hope I can still get my point across.
One thought that came to mind in making the API a bit more future-proof would be to use a logarithmic scale for bandwidth and another for latency, instead of trying to combine everything into an effective connection type based on a predefined set of thresholds. The logarithmic values could be rounded down to the unit to effectively create buckets.
Assuming we'd want to use a base 10 logarithm, for a 100ms latency you'd get a factor of 2, whereas for a 9000ms one you'd get a factor of 3.
Similarly, for a 300kbps connection you'd get a factor of 2 (if the logarithm is based on kbps, but it could of course be based on bps), whereas for a 5000kbps connection you'd get a factor of 3.
This could make the API usable and useful over time, no matter what network improvements we see in the long run, without the need to update thresholds every now and again.
Edit: to clarify, do note that the logarithm base and the rounding could be adjusted to suit the range of values that would be useful inside each bucket; using log10 and rounding down to the unit was just the easiest way I could think of getting my idea across.
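Concretely, the floored-logarithm bucketing suggested above works out like this (using log10, ms for latency, and kbps for bandwidth, per the examples in the comment - all of which could be tuned):

```javascript
// Illustrative only: derive coarse buckets by flooring a base-10 logarithm.
// The base, rounding, and units are adjustable assumptions.
function logBucket(value) {
  return Math.floor(Math.log10(value));
}

logBucket(100);  // latency of 100 ms  -> factor 2
logBucket(9000); // latency of 9000 ms -> factor 3
logBucket(300);  // 300 kbps           -> factor 2
logBucket(5000); // 5000 kbps          -> factor 3
```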
As suggested by @astearns in https://github.com/WICG/netinfo/issues/91#issuecomment-898762171, wanted to point folks who only track the present issue #85 at https://github.com/WICG/netinfo/issues/91#issuecomment-898377639, where I propose a reboot of the Network Information API.
Thank you for pointing to that discussion, @tomayac! As a performance engineer, I personally really like the new direction 👍
Thanks, Sérgio! Should this get implemented, I'm hoping for Automattic to test it :-)
That definitely sounds like something we could take a look at! I'd love to see these numbers make their way into our RUM data 🙂
As currently defined, ECT tries to mirror existing network characteristics, and maps measured values based on that. That means that a slow network is represented as "slow-2g" and a fast one as "4g".
With the introduction of 5G networks, we would probably need to revise that. Would it make sense to pick a set of values representing different speeds instead? In https://github.com/WICG/netinfo/issues/82#issuecomment-614507398 @marcoscaceres suggested we go with "slow", "medium" and "fast".
An alternative approach would be to expose well-defined buckets: e.g. "up to 100KB", "100KB-500KB", "500KB-2MB", "2MB-20MB", "20MB+"
That would make it clearer what the values should be from an implementation's perspective, and would be future proof (up to a point where we'd decide that the highest value bucket is not granular enough).