w3c / ServiceWorker

Service Workers
https://w3c.github.io/ServiceWorker/

Remove foreign fetch #1188

Closed jakearchibald closed 6 years ago

jakearchibald commented 7 years ago

Discussed in https://github.com/w3c/ServiceWorker/issues/1173.

Due to problems with double-keying, unclear trial results, and unclear use cases, we're going to remove foreign fetch from the spec (and from the Fetch spec).

We can reexamine use cases later and look to reintroduce it in another form once we have better data.
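For reference, the API being removed looked roughly like this during Chrome's origin trial (a sketch from the trial era; `registerForeignFetch` and the `foreignfetch` event no longer exist in any browser, and the scope/origin values below are illustrative):

```javascript
// Sketch of the removed foreign fetch API as trialled in Chrome.
// None of this works in current browsers.

// Pure helper (illustrative): does a request path fall inside a
// registered scope?
function inScope(pathname, scopes) {
  return scopes.some(scope => pathname.startsWith(scope));
}

// Service-worker wiring; guarded so this file is inert outside a worker.
if (typeof self !== "undefined" && typeof window === "undefined") {
  self.addEventListener("install", event => {
    // Opt in to handling cross-origin requests for these paths.
    event.registerForeignFetch({
      scopes: ["/api/"],
      origins: ["*"] // any origin may hit this worker
    });
  });

  self.addEventListener("foreignfetch", event => {
    event.respondWith(
      fetch(event.request).then(response => ({
        response,
        origin: event.origin // expose the response to the requesting origin
      }))
    );
  });
}
```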

Jxck commented 7 years ago

what is double-keying, and why does it disable foreign fetch? (I'm +1 for foreign fetch; separation of concerns in service workers)

mfalken commented 7 years ago

Double-keying refers to the mechanism some browsers use to separate data set by a cross-origin iframe from data set by the top frame, something like this: https://bugzilla.mozilla.org/show_bug.cgi?id=565965

Jxck commented 7 years ago

thanks @mattto, but how does this affect foreign fetch?

jakearchibald commented 7 years ago

One of the use-cases we had was font caching, where fonts.google.com would have its own service worker that handled its own caching strategies.

With double keying, when example.com uses the fonts.google.com foreign worker, it has storage and execution keyed to example.com+fonts.google.com. When jakearchibald.com uses the same foreign worker, the storage and execution is keyed to jakearchibald.com+fonts.google.com.

This results in the same fonts being stored multiple times, for each combination.
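A toy model of that partitioning (the key derivation here is purely illustrative, not an actual browser API; browsers compute partition keys internally):

```javascript
// Illustrative only: models the combinatorial duplication described
// above, where storage is keyed to (top-level site, worker origin).
function partitionKey(topLevelOrigin, workerOrigin) {
  return `${topLevelOrigin}+${workerOrigin}`;
}

// The same fonts.google.com worker gets a distinct storage bucket for
// every embedding site, so identical fonts end up cached repeatedly.
const a = partitionKey("https://example.com", "https://fonts.google.com");
const b = partitionKey("https://jakearchibald.com", "https://fonts.google.com");
console.log(a === b); // false: two separate caches for the same fonts
```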

Jxck commented 7 years ago

@jakearchibald makes sense, thanks!

jakearchibald commented 7 years ago

A foreign-fetch use-case: Kinda polyfilling something like cache digests https://twitter.com/mjackson/status/901090486739378177.

Double-keying would somewhat get in the way here, although you'd be able to send details of the double-keyed storage back to the CDN.

Jxck commented 7 years ago

is there a potential alternative to foreign fetch?

Third parties like ad providers, analytics, and CDNs seem to have use cases, and I've been waiting for foreign fetch to avoid/separate handling of these third-party requests.

jakearchibald commented 7 years ago

@Jxck I haven't heard a proposal that meets the use cases while preserving privacy.

mkruisselbrink commented 7 years ago

+1 to removing foreign fetch from the spec. What's maybe less clear is what to do about Link: headers and elements for installing service workers. Either remove those completely as well, or limit processing them to top-level loads (documents/workers)?
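For reference, header-based installation uses the `serviceworker` link relation, roughly like this (the `scope` attribute is optional):

```http
Link: </sw.js>; rel="serviceworker"; scope="/"
```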

annevk commented 7 years ago

Please also ensure tests are updated and browser bugs get filed. See https://github.com/whatwg/fetch/pull/596#issuecomment-329368947 for details.

rektide commented 7 years ago

One example use case I was working towards: I wanted to build an audioscrobbling service. It would let any media player register that the user was viewing something. Without foreign fetch, this is impossible to do in the offline case.

I was also working on a library, 0hub, to fulfill some of my early hopes for navigator-connect, namely enabling discovery. Rather than have to know about my audioscrobbling service, I was hoping to make a service where other services could register. My scrobbler could register itself, as could other scrobblers, and then anyone who wanted to post scrobbles could query for any scrobbling services and push to all of them.

Later down the road I intended to implement a feed reader around this premise.

This is truly one of the saddest things I have ever heard for the web. Offline will be savagely limited, not web at all, if we can only work offline with ourselves. The web has to have some kind of functional, interconnected offline capability. It has to.

I totally would not expect radical tech like this to have fast uptake. It needs half a decade of people playing around with it, learning about it, mainstreaming it, and building libraries. We barely have service workers. Please, let new trials begin. Soon. This is incredibly deeply saddening to hear.

mjackson commented 7 years ago

Thanks for posting my tweet here, @jakearchibald 😅

Just wanted to chime in and say that I think the possibility of using foreign fetch to improve caching behavior for a CDN like unpkg.com is really appealing, especially with the advent of web modules. FF would make it possible to build better support for a module-level cache.

phamann commented 7 years ago

Maybe less clear what to do about Link: headers and elements for installing service workers?

I would like to echo this as well, whilst I'm not too concerned about losing foreign fetch for the time being. The ability to install a Service Worker via a Link header opened a lot of very interesting possibilities for delivering dynamic client-side caching logic via CDNs that act as a proxy for the first-party domain, without having to compromise security or mutate HTML document responses. The cache-digests polyfill mentioned above is one such example.

rektide commented 6 years ago

This can be closed as per #1207, I believe. But I would very much like a clearer picture of what the challenges are to re-opening it, and to hear thoughts on what can be done to help advance this fantastically important capability, which greatly facilitates, and is necessary for, a useful offline web.

annevk commented 6 years ago

You'd need to come up with a way of adding them without making tracking worse.

jozanza commented 6 years ago

Does anyone know the status/direction of foreign fetch? It still seems incredibly useful, and I'd love a chance to build something with it. Are there plans for future Origin Trials in Chrome? Or is this feature entirely deprecated with no plans to be implemented any longer?

mfalken commented 6 years ago

Chrome has no plans currently to reimplement foreign fetch.

jakearchibald commented 6 years ago

@jozanza what are you wanting to do with it?

jozanza commented 6 years ago

@jakearchibald unless I'm misunderstanding how it works, foreign fetch seems like a huge boon to WebRTC. I'd want to use it to cache offers/answers for RTCPeerConnection signaling. And once connected, it could also be used to scalably relay media streams without a CDN.

jakearchibald commented 6 years ago

@jozanza Are you speaking as the person who'd own the RTCPeerConnection signaling server, or the person who'd run the site using the RTCPeerConnection signaling server?

jozanza commented 6 years ago

@jakearchibald I started writing and was having a hard time describing what I was thinking clearly. So I wrote some pseudo-code here. It shows more or less what I was hoping would be possible:

async function sendMediaToPeer({ config, from, to }) {
  // Create a peer connection
  const pc = new RTCPeerConnection(config);
  // Get the user's video/audio stream
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true
  });
  // Do all the signaling with a foreign fetch service worker 🤞
  pc.onnegotiationneeded = async () => {
    // Create the offer and set it as the local description
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    // Gather all ICE candidates for simplified signaling
    while (pc.iceGatheringState !== "complete") {
      await new Promise(f => setTimeout(f, 100));
    }
    // Send the offer; the request gets intercepted by the service
    // worker, which can store the offer with the Cache API
    const res = await fetch(`${API_ROOT}/${from}/offers`, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ to, offer: pc.localDescription })
    });
    if (!res.ok) throw new Error("Could not create offer");
    // The offer is now cached in the service worker, so
    // we can just poll for an answer from the intended peer
    // (they would use the service worker to post their answer)
    while (true) {
      const res = await fetch(`${API_ROOT}/${to}/answers`);
      const answer = res.ok ? await res.json() : null;
      if (answer) {
        // Aaaand we're connected! :)
        await pc.setRemoteDescription(answer);
        break;
      }
    }
  };
  // Adding the tracks fires negotiationneeded and kicks off signaling
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream);
  }
}

tl;dr It'd be pretty amazing if all of the signaling between peers could be done in a serverless manner by relying on a common foreign fetch service worker storing offers/answers with "a single, authoritative cache instance".

And the signaling code could obviously be even cleaner if the foreign fetch service worker also supported onmessage() and Client.postMessage(). It could almost be a replacement for a WebSocket server at that point. But again, I'm not sure if I totally misunderstood what foreign fetch is intended for and capable of. Admittedly, this all just seems too good to be possible 😅.

joymon commented 6 years ago

Does this mean that if my site is on example.com and consumes api.example.com, the API requests cannot be intercepted by a fetch handler due to cross-origin limits?

annevk commented 6 years ago

They can be intercepted by example.com, not by api.example.com.
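A sketch of what that first-party interception might look like (origins are illustrative; this is a service worker registered on example.com intercepting its pages' requests to api.example.com):

```javascript
// A first-party service worker intercepts all fetches made by pages it
// controls, including cross-origin subresource requests; interception
// is keyed to the controlling page's origin, not the request's target.

// Pure helper: does this request target our API origin?
function isApiRequest(url, apiOrigin = "https://api.example.com") {
  return new URL(url).origin === apiOrigin;
}

// Service-worker wiring; guarded so this file is inert outside a worker.
if (typeof self !== "undefined" && typeof window === "undefined") {
  self.addEventListener("fetch", event => {
    if (isApiRequest(event.request.url)) {
      // e.g. serve cached API responses while offline
      event.respondWith(
        caches.match(event.request).then(hit => hit || fetch(event.request))
      );
    }
  });
}
```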

daviddias commented 5 years ago

Very sad to hear that this has been dropped. We, the IPFS Project, were super excited about this feature -- https://github.com/ipfs-shipyard/ipfs-service-worker/issues/5 --

With double keying, when example.com uses the fonts.google.com foreign worker, it has storage and execution keyed to example.com+fonts.google.com

Making the base storage content-addressed instead of key-value would enable the browser to avoid false cache misses.

and unclear use-cases

Foreign Fetch would enable an IPFS Node to run on a Service Worker and become the ideal Content Addressed CDN.

You can get a taste of this by visiting https://js.ipfs.io


Once enabled, a js-ipfs node is spawned in a Service Worker and any request to js.ipfs.io/ipfs/SomeHash is routed through the js-ipfs node itself. With foreign fetch, we would be able to load webpage assets from js.ipfs.io; if the Service Worker was installed, the browser could cache them or, even better, serve them locally to other browsers. If the Service Worker was not installed, the request would go to one of the IPFS Gateways.

I believe this to be a fantastic use case for foreign fetch, as it would enable web assets to be loaded through a DWeb protocol that verifies their integrity and can serve the same fetched assets to nearby peers. Would love to hear your opinion and know if there is still time to make the case.
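The routing described might look roughly like this inside the js.ipfs.io service worker (a sketch only: `localNodeFetch` is a hypothetical stand-in for resolving content through the in-browser js-ipfs node, and the gateway URL is an assumption):

```javascript
// Sketch: route /ipfs/<hash> requests through a local node, falling
// back to a public gateway on failure.
const GATEWAY = "https://ipfs.io"; // assumed fallback gateway

// Pure helper: extract the /ipfs/... path from a URL, or null.
function ipfsPath(url) {
  const { pathname } = new URL(url);
  return pathname.startsWith("/ipfs/") ? pathname : null;
}

// Service-worker wiring; guarded so this file is inert outside a worker.
// localNodeFetch is hypothetical, standing in for the js-ipfs node.
if (typeof self !== "undefined" && typeof window === "undefined") {
  self.addEventListener("fetch", event => {
    const path = ipfsPath(event.request.url);
    if (path) {
      event.respondWith(
        localNodeFetch(path).catch(() => fetch(`${GATEWAY}${path}`))
      );
    }
  });
}
```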

hatgit commented 5 years ago

In case it helps this issue be considered for re-opening: I would like a web app to work on a standalone basis, with all code inline and no additional files required (i.e. no separate service worker JavaScript file). Currently a service worker cannot be installed unless a separate file is used (unless I'm mistaken and the code contained in that script could be added inline to the .html file, which is what I am trying to do).

https://stackoverflow.com/questions/47163325/register-inline-service-worker-in-web-app?rq=1

P.S. I was experimenting with getting the JSON manifest inline as well. The application I am using is browser-based but works the same whether online or offline, so it should meet the definition of a progressive web app. However, not being able to add the service worker without creating additional files causes certain tests to fail when running a Lighthouse audit in Chrome.
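A minimal sketch of the inline attempt described above (assuming a Blob URL; the spec requires service worker script URLs to use the http or https scheme, which is why this fails and a separate file remains necessary):

```javascript
// Sketch of attempting to register a service worker from inline source.
// Browsers reject non-http(s) script URLs for service workers, so this
// is expected to fail; it illustrates why a separate file is required.
const swSource = `self.addEventListener("fetch", e => { /* ... */ });`;

// Turn inline source into a "blob:" URL.
function inlineScriptUrl(source) {
  const blob = new Blob([source], { type: "text/javascript" });
  return URL.createObjectURL(blob);
}

async function tryInlineRegistration() {
  try {
    // Expected to reject: blob: is not an allowed script URL scheme
    // for service worker registration.
    return await navigator.serviceWorker.register(inlineScriptUrl(swSource));
  } catch (err) {
    console.warn("Inline registration rejected:", err.message);
    return null;
  }
}
```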

chadananda commented 3 years ago

Seems like this would have been a great way to give web apps the ability to fetch content from mirrored sources if the primary source is down or blocked.

The service worker should be able to work its way down a list of mirror domains, perhaps falling back to something like an IPFS request when all mirrors fail. Perhaps there should be an exception, allowing this to be done for assets which provide a content hash or are otherwise uniquely named, so as to not require further keying. If it's a security concern, then require the asset name to match the content hash and enforce a check before updating the cache, IPFS-style.

This would enable dramatically more robust apps which might take advantage of P2P networks to fetch resources locally when available.
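The fallback strategy described above could be sketched like this (a hypothetical helper, not an existing API; the fetch function is injected so the strategy can be exercised outside a worker):

```javascript
// Hypothetical sketch of mirror fallback for a service worker's fetch
// handler. In a worker you would pass the global fetch as fetchFn.
async function fetchFromMirrors(path, mirrors, fetchFn) {
  for (const origin of mirrors) {
    try {
      const res = await fetchFn(`${origin}${path}`);
      // A content-hash check on the response body would go here, so a
      // compromised mirror can't poison the cache (IPFS-style).
      if (res.ok) return res;
    } catch (err) {
      // Network failure or blocked origin: try the next mirror.
    }
  }
  throw new Error(`All mirrors failed for ${path}`);
}
```

In a fetch event handler this might be called as `event.respondWith(fetchFromMirrors(url.pathname, MIRRORS, fetch))`, with an IPFS gateway as the final entry in the mirror list.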

jakearchibald commented 3 years ago

I don't see how that could work now that browsers bucket storage & cache per origin.

rektide commented 3 years ago

Storage security does limit a lot of use cases, but I continue to believe very strongly that there's a ton of use cases for foreign fetch that make sense with per-origin sharding.

For example, if I want to make an offline-capable recently-listened/"scrobbler" service that your/any website can post currently-playing music to, what are my options? I can use something custom & fancy like Comlink and its implicit internal protocols to message an iframe. But I'd much rather, when I load my music player, give it a URL it can post tracks to. It can do whatever login flow is needed, and from then on your music player app would be able to use foreign fetch to post new tracks, or to get a list of its own recently-listened tracks from my recently-listened service. The SW can store & forward this data.

The architectures enabled by foreign fetch are so much better than the alternatives; they make the web really offline-capable. I hope beyond hope we can stop saying that per-origin caches make foreign fetch pointless. Yes, that does obstruct some use cases, but for many, re-downloading the data and re-logging in are not a problem at all, and the architecture is 100x "more web", more HTTP-centric, versus having to get fancy. Developers know HTTP. Please, let's give them the capability to use HTTP across origins via service workers.

restspace commented 3 years ago

Foreign fetch could be used in development to simulate a server running within the browser. For a runtime like Deno, the service worker and server-side code would be almost the same, and the locally running code could be debugged using browser tools. Generally speaking, I would imagine there could be many applications where this could be useful. You could view it as extending edge or 'originless' computing right into the browser.

mircerlancerous commented 3 years ago

Need this to be put back into the spec in some fashion; the sooner the better.

I'm trying to build a single sign-on (SSO) solution that relies upon a service worker at an SSO address; other services can query the SSO URL and receive a response from the service worker identifying the user account in the browser. This offline functionality is required to give a browser-based solution the same capabilities as a native application.

Aside from my SSO project, I can see limitless potential in offline browser-local APIs powered by service workers. There are numerous examples in previous comments; what is needed from me or them to get this back into the limelight?

JEduardoPeralta commented 2 years ago

I want to know how I can access images from my server with MV3; I can't get access to my images on the local network. Do I need a permission or something?