dbkr opened 1 month ago
For the cookie approach: doing round trips isn't great if we can avoid them. Maybe we set the cookie on /sync (and via other similar endpoints for the non-syncing clients) as "things the client is already doing". That could allow the media download to continue being one theoretical round trip.
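A rough sketch of what that could look like server-side, assuming an Express-style handler (the cookie name and attributes are purely illustrative):

```ts
// Sketch: mirror the access token into a cookie on /sync responses,
// so the browser can later make plain <img> requests to the media repo.
// Express-style handler; cookie name and attributes are illustrative.
import express from "express";

const app = express();

app.get("/_matrix/client/v3/sync", (req, res) => {
  const auth = req.header("Authorization"); // "Bearer <token>"
  const token = auth?.startsWith("Bearer ") ? auth.slice(7) : undefined;

  if (token) {
    // HttpOnly + SameSite keep the token out of page scripts and
    // away from cross-site requests.
    res.cookie("matrix_media_token", token, {
      httpOnly: true,
      secure: true,
      sameSite: "strict",
    });
  }

  // ... normal /sync handling would go here ...
  res.json({ next_batch: "s1_example" });
});
```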
Implementation-wise: in retrospect, MSC3916 should have had more implementations to prove it works in a wider variety of situations. Authenticated media v2 would likely need a higher bar than MSC3916 did, to avoid a v3 iteration.
Hm, have the browser clients considered plain old fetch instead of service workers? Like, for example: https://alphahydrae.com/2021/02/how-to-display-an-image-protected-by-header-based-authentication/
This should work in browsers and not have the flaws of the service worker, I believe.
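A minimal sketch of the approach from that article: fetch the media with an Authorization header, then hand the browser a blob: URL it can render without any further auth:

```ts
// Sketch of the "plain old fetch" approach: fetch with the auth header,
// then give the browser a blob: URL that needs no further auth.
async function loadAuthedImage(
  img: HTMLImageElement,
  url: string,
  accessToken: string,
): Promise<void> {
  const resp = await fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!resp.ok) throw new Error(`media fetch failed: ${resp.status}`);

  const blob = await resp.blob();       // the whole file ends up in memory
  img.src = URL.createObjectURL(blob);  // revoke later to free the memory
}
```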
That's what's done for encrypted media, but it's not great because the media needs to live in memory.
Regarding the proposal to use Cookies, in MSC3916:
Cookies are a plausible mechanism for sharing session information between requests without having to set headers, though would be a relatively bespoke authentication method for Matrix. Additionally, many Matrix users have cookies disabled due to the advertising and tracking use cases common across the web.
This wording makes me personally oppose using cookies.
This also kind of makes me wonder how we should design authenticated media APIs in this regard, perhaps also in relation to the tight schedule of the original authenticated media spec.
Hm, have the browser clients considered plain old fetch instead of service workers? Like, for example: https://alphahydrae.com/2021/02/how-to-display-an-image-protected-by-header-based-authentication/
This should work in browsers and not have the flaws of the service worker, I believe.
This is what I was talking about in my second paragraph ("treat unencrypted media as we do encrypted").
This wording makes me personally oppose using cookies.
Which word in particular? I think this paragraph is a little misleading. When we talk about "disabling cookies", that can mean a lot of different things. Usually, a browser will delete cookies after a period of time, or reject third-party cookies. Cookies are not really much different to localstorage, which is a feature we rely on. However, a browser that rejected cookies entirely would not be able to view media, whereas at least in theory a client could work without localstorage by keeping everything purely in memory.
Maybe we set the cookie on /sync (and via other similar endpoints for the non-syncing clients)
Yep, this is certainly an option and would avoid having an extra endpoint; we just get an unnecessary cookie header for things that aren't web, and some edge cases where media doesn't work if you don't sync for some reason?
All that said, personally, I'm coming around to the option of using timed URLs, where we have a specific C/S endpoint to generate the URLs. This does go against Travis's point about unnecessary round trips, since it would force an extra round trip to the HS, which is a little silly in the (common) case where the HS and the media repo are the same thing. However:
...so it ends up being quite a simple extension to the current spec to add an endpoint for web clients to get pre-authed URLs for media that they can put straight into the DOM. That is, you can either use client/v1/media/download with an Auth header to get the media directly, or you can use (for example) client/v1/media/download/link (still with an Auth header) to get a pre-authed URL (that does not need an auth header) that you can put in a src attribute.
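To illustrate, a sketch of how a web client might use such an endpoint (the endpoint path follows the example above; the response field name is an assumption):

```ts
// Sketch: exchange an authed request for a pre-authed URL that can go
// straight into a src attribute. The endpoint path follows the example
// above; the response field name ("url") is an assumption.
async function getPreAuthedUrl(
  mediaPath: string,
  accessToken: string,
): Promise<string> {
  const resp = await fetch(`/_matrix/client/v1/media/download/link/${mediaPath}`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  const body = await resp.json();
  return body.url; // time-limited URL, usable without an auth header
}

// Usage: no service worker, no blob held in memory.
// document.querySelector("img")!.src = await getPreAuthedUrl("example.org/abc123", token);
```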
Still more research needed, but those are my thoughts right now.
This wording makes me personally oppose using cookies.
Which word in particular?
I mean the entire paragraph I quoted from MSC3916, but thanks for your opinion.
All that said, personally, I'm coming around to the option of using timed URLs, where we have a specific C/S endpoint to generate the URLs. This does go against Travis's point about unnecessary round trips, since it would force an extra round trip to the HS, which is a little silly in the (common) case where the HS and the media repo are the same thing.
I'm mostly in the same position.
Also note: during my work on Cinny, I found that Firefox won't issue service worker requests for <video> and <audio> src attributes. I'm not sure why that is, and I'm not sure about the Chrome / Safari behaviour.
Time-limited URLs may have a downside where users copy/paste things from their browser and expect it to work. Eventually the request would fail, and leave the user confused.
Picking a recommended time may also be a challenge.
We should look at what Discord does for this. They've set expectations with their users already.
We should look at what Discord does for this. They've set expectations with their users already.
Last I checked, Discord links had a long expiry (~2 weeks). They also allow pasting their links into other chats and will embed the content regardless of the link expiry. However, pasting media links into Matrix chats isn't common practice like it is on Discord, so I don't think there's any need to try to copy that part.
That's what's done for encrypted media, but it's not great because the media needs to live in memory.
BTW, I think the service worker approach already broke caching. Avatars in Element Web on Firefox reload every time I switch spaces.
Another alternative: cache the image via plain old fetch into OPFS and then load it from there. That's what Trixnity does.
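A sketch of that OPFS approach, under the assumption of a simple per-file cache key (all names illustrative):

```ts
// Sketch of the OPFS approach described above (as in Trixnity):
// fetch once with the auth header, persist into the Origin Private
// File System, then serve subsequent loads from disk, not memory.
async function cachedMediaUrl(
  url: string,
  cacheKey: string,
  accessToken: string,
): Promise<string> {
  const root = await navigator.storage.getDirectory();

  let handle: FileSystemFileHandle;
  try {
    handle = await root.getFileHandle(cacheKey); // cache hit
  } catch {
    // Cache miss: fetch with the auth header and stream to OPFS.
    const resp = await fetch(url, {
      headers: { Authorization: `Bearer ${accessToken}` },
    });
    handle = await root.getFileHandle(cacheKey, { create: true });
    const writable = await handle.createWritable();
    await resp.body!.pipeTo(writable); // streams to disk and closes the writable
  }

  const file = await handle.getFile();
  return URL.createObjectURL(file);
}
```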
That's what's done for encrypted media, but it's not great because the media needs to live in memory.
BTW, I think the service worker approach already broke caching. Avatars in Element Web on Firefox reload every time I switch spaces.
I've noticed this too, but browser devtools say the requests are being served from the local cache rather than the server. This is confirmed by server logs too.
I think, for whatever reason, the browser just hits a slower cache. Would be something to discuss outside the spec.
I am similarly displeased with the current implementation: I attempted to implement media streaming (for slow connections), but ended up with JavaScript/DOM race conditions instead... I have not used web workers for this, as they add a non-trivial amount of complexity (attributing img tags to the correct user session, ...).
Just one more idea about the originally mentioned "Timed Query Params" pseudo-JWT: if you also hashed the client IP into the mix along with the time, and the server refused to serve the media to anyone else based on that claim, it could also solve the issue of people accidentally copying the URL and sending it to somebody, or of a rogue web server admin snooping around in the access logs just so they can download that user avatar.
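To sketch the idea (all names illustrative; an HMAC with a server-held secret stands in for the h() above):

```ts
// Sketch of the IP-bound variant of "Timed Query Params": the server
// signs media_id + expiry + client IP, and refuses to serve the media
// if the requesting IP doesn't reproduce the same MAC. Names are
// illustrative; SERVER_SECRET is server-side only, never sent to clients.
import { createHmac, timingSafeEqual } from "node:crypto";

const SERVER_SECRET = process.env.MEDIA_URL_SECRET!;

function sign(mediaId: string, expiresAt: number, clientIp: string): string {
  return createHmac("sha256", SERVER_SECRET)
    .update(`${mediaId}|${expiresAt}|${clientIp}`)
    .digest("hex");
}

function verify(
  mediaId: string,
  expiresAt: number,
  clientIp: string,
  mac: string,
): boolean {
  if (Date.now() / 1000 > expiresAt) return false; // link expired
  const expected = sign(mediaId, expiresAt, clientIp); // wrong IP => wrong MAC
  return (
    mac.length === expected.length &&
    timingSafeEqual(Buffer.from(mac), Buffer.from(expected))
  );
}
```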
Adding in here as it may be relevant, or someone might be able to help, but authenticated media doesn't appear to work in the Tor Browser Bundle. I'm using app.element.io.
I'd use the desktop app, but there are many issues with it that make it unusable for me at the moment. (Raised elsewhere.)
The Tor Browser is likely operating as a private browser, which is the original reason this issue was opened.
Problem
We've recently rolled out authenticated media. Whilst this has been broadly successful, it requires that clients send Authorization headers when making media requests. This is problematic for web clients, where the request for the media usually comes straight from the browser and therefore needs to be intercepted by a service worker (or similar on Electron etc.) to add the header. Critically, this means (unencrypted) media simply doesn't work anywhere service workers aren't available. At the time of writing, this includes Firefox in Private Browsing mode, which accounts for a nontrivial number of users.
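For context, a minimal sketch of the interception service workers do today (illustrative only, not any client's actual code; getAccessToken is an assumed app-specific helper):

```ts
// Sketch of today's service worker interception (illustrative only).
// getAccessToken() is an assumed app-specific helper returning the
// current access token.
declare function getAccessToken(): Promise<string>;

self.addEventListener("fetch", (event: FetchEvent) => {
  const url = new URL(event.request.url);
  if (!url.pathname.startsWith("/_matrix/client/v1/media/")) return;

  event.respondWith(
    (async () => {
      // Clone the request and attach the Authorization header before
      // it leaves the browser.
      const headers = new Headers(event.request.headers);
      headers.set("Authorization", `Bearer ${await getAccessToken()}`);
      return fetch(new Request(event.request, { headers }));
    })(),
  );
});
```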
The only workaround would be to treat unencrypted media as we do encrypted: fetch it separately with a fetch request, then load it into a blob to display. This wouldn't work well for large files, though.
Proposal
We can either accept this situation as it is (and ignore Firefox Private Browsing unless/until they allow service workers) or we can introduce a different method.
Advantages of changing are:
Disadvantages:
I've had a chat with spec core team members on this, and the main alternatives we came up with were:
Cookies
Authenticate to the media repo by sending a regular cookie, like in the 2000s. Non-web clients would likely just be able to set the Cookie header the same way they set the Authorization header. Web clients would need a little work to avoid relying on third-party cookies. The most plausible way to do this may be to add an endpoint on the homeserver that's authenticated via the regular Authorization header but sets that same token as a cookie. A browser can then call this endpoint to set the cookie in the browser, meaning that it will be sent on all requests to the homeserver from that point on (so non-media requests might end up with double auth, or perhaps we'd rely on the Cookie header for all C/S requests?)
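Client-side, the flow might look something like this sketch (the endpoint path is entirely made up for illustration):

```ts
// Sketch of the browser side of the cookie approach: one call to a
// hypothetical endpoint that turns the Authorization header into a
// cookie, after which plain <img> tags just work against the homeserver.
async function establishMediaCookie(accessToken: string): Promise<void> {
  await fetch("/_matrix/client/v1/auth/cookie", { // endpoint path is made up
    method: "POST",
    headers: { Authorization: `Bearer ${accessToken}` },
    credentials: "include", // accept and store the Set-Cookie response
  });
  // From here on the browser attaches the cookie itself, e.g.:
  // <img src="https://example.org/_matrix/client/v1/media/download/...">
}
```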
Timed Query Params
Add a query parameter to every URL that contains the auth. This would likely be some kind of hash like token_id + time + h(token + time), so the server could check the token was valid and used within whatever time window it required. Clients could lie about the time, but the URL would only be valid for x time after the given time.
Alternatively, the client could do an extra request to the HS to get a URL that would give it the media directly. This would allow the HS to do whatever auth semantics it needs (linked media etc.) and would be broadly equivalent to an HTTP redirect (or even faster if we did them in batches).
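A sketch of the first variant from the client's side, assuming plain SHA-256 for h and illustrative parameter names (the server, which knows the token for that token_id, recomputes the hash and checks the time window):

```ts
// Sketch: client builds a URL carrying token_id + time + h(token + time)
// as query parameters. Uses Web Crypto; parameter names are illustrative.
async function timedMediaUrl(
  baseUrl: string,
  tokenId: string,
  token: string,
): Promise<string> {
  const time = Math.floor(Date.now() / 1000).toString();

  // h(token + time): plain SHA-256 over the concatenation, hex-encoded.
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(token + time),
  );
  const mac = [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");

  const url = new URL(baseUrl);
  url.searchParams.set("token_id", tokenId);
  url.searchParams.set("time", time);
  url.searchParams.set("mac", mac);
  return url.toString(); // valid only within the server's time window
}
```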
Next Steps
We think, to move forward on this, we would like: