foolip closed this issue 7 years ago.
There's no isolation. Extracting a few relevant sections from the spec:
> The attacker can use JavaScript to observe the duration (e.g. time from start of fetch to onload event) of any network fetch on the client, and may get more detailed timing data about the same fetch via the Resource Timing API. ... If the attacker can initiate or observe a network fetch of any kind from the client, then they can observe its performance characteristics and how they change over time.
For example, an attacker can inject `img` elements pointing to arbitrary resources and use timing information to estimate RTT+bandwidth against an arbitrary origin.
Pushed a small update to call this out explicitly: https://github.com/WICG/netinfo/commit/16a38a0d4c54b75c563892cb7909534a816709c3
Does the above make sense / address your concern?
> For example, an attacker can inject `img` elements pointing to arbitrary resources and use timing information to estimate RTT+bandwidth against an arbitrary origin.
Does that not assume that the size of the resource is known, which might be what one is trying to determine? Something that comes to mind is the obfuscation of resource sizes mentioned in https://groups.google.com/a/chromium.org/d/msg/blink-dev/frMdM1H8jJ8/ApV1uelFBgAJ. If the resource size is not known, it sounds like one could repeatedly fetch the same resource in `img` elements and measure the time, then use the netinfo API, assume that it reflects the RTT+bandwidth to that origin, and calculate the file size.
Is that at all plausible?
https://github.com/WICG/netinfo/commit/16a38a0d4c54b75c563892cb7909534a816709c3 LGTM though, thanks!
> Does that not assume that the size of the resource is known, which might be what one is trying to determine? Something that comes to mind is the obfuscation of resource sizes mentioned in https://groups.google.com/a/chromium.org/d/msg/blink-dev/frMdM1H8jJ8/ApV1uelFBgAJ. If the resource size is not known, it sounds like one could repeatedly fetch the same resource in `img` elements and measure the time, then use the netinfo API, assume that it reflects the RTT+bandwidth to that origin, and calculate the file size.
For RTT + downlink estimation: you can fetch a non-auth'ed resource (every origin will have one, even if it's an error page asking for login) from any origin and get a good estimate of RTT and throughput based on your knowledge of the expected size of the non-auth'ed response. Plenty of JS "bw/rtt estimator" libraries do exactly that.
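The estimator approach described above reduces to simple arithmetic: model load time as one RTT plus transfer time, then solve for both unknowns from two resources of known size. A minimal sketch (illustrative names, not taken from any particular library):

```javascript
// Model: loadTimeMs = rttMs + sizeBytes / bandwidth.
// With two timed fetches of known sizes, solve for both unknowns.
function estimateRttAndBandwidth(t1Ms, size1Bytes, t2Ms, size2Bytes) {
  // Bandwidth (bytes/ms) is the slope between the two measurements.
  const bytesPerMs = (size2Bytes - size1Bytes) / (t2Ms - t1Ms);
  // RTT is the intercept: time left after transferring the first resource.
  const rttMs = t1Ms - size1Bytes / bytesPerMs;
  return { rttMs, bytesPerMs };
}
```

In practice one would average many samples, since any single measurement is dominated by cache state and server variability.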
Knowing the above, you can then fetch an auth'ed resource (e.g. via `img`) from the same origin and use recent RTT+BW estimates plus the loading time of the auth'ed resource to back out an estimate of its size. As such, the delta between what's possible today and what you might estimate from exposed rtt+downlink signals is small. That said, paging @jkarlin for a sanity check.. perhaps there are additional precautions we should consider here. Josh, any thoughts or comments?
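Backing out the size is the same model run in reverse. A hedged sketch, assuming the additive model above and netinfo-style inputs (`rtt` in ms, `downlink` in Mbit/s):

```javascript
// Given rtt (ms) and downlink (Mbit/s) estimates, back out the
// approximate size of a timed auth'ed response.
function estimateSizeBytes(loadTimeMs, rttMs, downlinkMbps) {
  const transferMs = Math.max(loadTimeMs - rttMs, 0); // time spent transferring
  const bytesPerMs = (downlinkMbps * 1e6) / 8 / 1000; // Mbit/s -> bytes/ms
  return transferMs * bytesPerMs;
}
```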
As far as guidance to developers goes for mitigating these types of attacks.. we come back, once again, to Stop Cross-Site Timing Attacks with SameSite cookies.
I agree that for a given origin you can measure the bandwidth yourself with resources of known size. Though it's perhaps simpler with RTT+BW measurements because you can target an arbitrary site without needing to know the sizes of resources a-priori.
There is also concern that RTT+BW could reveal navigation history. For instance, if the RTT is very low, then the client has recently been spending time on intranet sites.
> Though it's perhaps simpler with RTT+BW measurements because you can target an arbitrary site without needing to know the sizes of resources a-priori.
Practically speaking, I think that's a very low bar: pick any static asset such as a logo, or an error page, and you'll have a stable baseline.
> There is also concern that RTT+BW could reveal navigation history. For instance, if the RTT is very low, then the client has recently been spending time on intranet sites.
Hmm. Perhaps we should exclude requests to private subnets.. @tarunban wdyt?
Excluding requests to private subnets sounds like a good idea. Filed https://bugs.chromium.org/p/chromium/issues/detail?id=731797 for tracking Chromium work.
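For concreteness, the kind of filter the Chromium bug implies might look like an RFC 1918 check on the peer address (a sketch only; assumes IPv4 dotted-quad input and ignores IPv6 and link-local ranges):

```javascript
// True if the peer address falls in an RFC 1918 private IPv4 range,
// i.e. an RTT sample that could reveal intranet browsing.
function isPrivateIPv4(ip) {
  const [a, b] = ip.split(".").map(Number);
  return (
    a === 10 ||                          // 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168)             // 192.168.0.0/16
  );
}
```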
Does RTT also reveal proxy information? E.g., if I'm onion routing then my RTT will presumably be considerably higher than my RTT to the page being loaded.
There is undoubtedly some navigation history information being leaked, though I don't know how many bits.
Isn't that already the case? Origins can compare the transport-layer RTT (from the socket endpoint at the server) to the RTT of the page being loaded.
> Does RTT also reveal proxy information? E.g., if I'm onion routing then my RTT will presumably be considerably higher than my RTT to the page being loaded.
That would apply to all requests, so high RTT alone doesn't distinguish between a slow connection and a proxied connection. Also, thinking out loud.. if you have a cooperating server, the client and server can make local observations and infer some properties of the connection (e.g. compare server TCP RTT vs. client-observed end-to-end RTT), like the presence of an intermediary or the fact that you may be using Tor. That said, there are other interesting implications to consider here..
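The cooperating-server observation above boils down to a simple comparison; a sketch with an illustrative threshold (not a measured value):

```javascript
// A server can compare its transport-level (TCP) RTT to the client's
// end-to-end RTT; a large gap hints at an intermediary (VPN, Tor, proxy).
function intermediaryLikely(clientRttMs, serverTcpRttMs, thresholdMs = 100) {
  return clientRttMs - serverTcpRttMs > thresholdMs;
}
```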
Chatting w/ Tarun out-of-band, some notes and thoughts:
- `downlink` is measuring end-to-end (application / HTTP-layer) throughput.
- `rtt` can be defined to measure either transport RTT or application / HTTP-layer RTT.
There are pros and cons for each of the above. In the transport-RTT case the implicit assumption is that it's likely to be the most limiting hop, but it exposes new information that was previously not available (e.g. TCP RTT to a VPN/Tor proxy), and there is a mismatch with the `downlink` definition. On the other hand, HTTP-layer RTT maps directly to what you can observe today by making a request, but is subject to other "noise" like server think and response times, etc.
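As a toy illustration of the "noise" point (assuming a purely additive model, which is a simplification):

```javascript
// Application-layer RTT stacks server think time and queuing delay
// on top of the transport-layer RTT, which is why it is noisier.
function applicationRttMs(transportRttMs, serverThinkMs, queuingMs = 0) {
  return transportRttMs + serverThinkMs + queuingMs;
}
```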
Based on experiments Tarun ran, transport-RTT did provide better precision and recall than application-RTT, which is why we proposed it here. That said, we can probably improve application-RTT estimates, and in light of the above.. I think there is merit in aligning the `rtt` definition to use application-RTT: consistent definitions, minimal exposure of new information.
In summary, I'd propose the following updates:
- `rtt`: define it as application / HTTP-layer RTT, consistent with `downlink`.
WDYT? Is this a step in the right direction?
Both ideas sgtm.
Opened https://github.com/WICG/netinfo/pull/62 -- @tarunban @jkarlin ptal.
@foolip ditto, ptal at the pull request. Does it address your questions here? Anything else we need to tackle?
Commented on https://github.com/WICG/netinfo/pull/62 on something outside the scope of this issue. https://github.com/WICG/netinfo/commit/16a38a0d4c54b75c563892cb7909534a816709c3 still LGTM and I have nothing further, so OK with me to close this issue.
I'm not a networking expert, but from https://wicg.github.io/netinfo/#privacy-considerations alone and searching for keywords like "origin" and "end-to-end" in the spec, it's not super clear whether the `downlink` and `rtt` attributes are supposed to be the same for all origins (tabs, roughly) or if there's any kind of isolation intended.

For example, is there anything that evil.com can learn by including non-evil.com in one or many iframes and observing? Even if deemed acceptable privacy risks, spelling out any implications for cross-origin information leakage, and not just information about the user's network characteristics, would be good.