Open nikeee opened 2 years ago
Sorry for the late response. I've actually been thinking about this for a while, and you're correct; I'll get right on it.
Thanks for your reply! I just had a thought on that and noticed that it might not be that easy if the user closes the initial tab.
Can I help somehow?
Sure, I'm about to have a coding sesh, you can join if you'd like
My issue is similar to this.
Right now, I'm thinking of emulating a "SharedWorker" within ServiceWorkers by only allowing required actions (in my case, network requests to the backend server) to be performed by the ServiceWorker associated with the first opened tab for an origin. The other tabs will register their requests in an IDB request-queue store via Dexie.js (which I'm already using); then, periodically, the master Service/SharedWorker will process the requests in one batch request (with server-side code handling the various requests and returning them as JSON, which the master SW can parse and save in a queue-responses IDB store). The other tabs will receive updates via the liveQuery mechanism introduced in Dexie 3.2. It seems like it'll work, but it's clearly an overly-complicated kludge.
I'd be curious to see what you guys can come up with for this - perhaps BroadcastChannel would be a simpler mechanism! Though, I suspect my method will be necessary for my needs, since I'm not so much trying to sync tabs as consolidate unique network requests into a single batch request.
Edit: Now that I think of it some more, I'm not even sure a SharedWorker could do what I'm looking for. I just figured that, through some async/await mechanism, the ServiceWorkers would wait for the SharedWorker to process the various requests. But it really might just be necessary (or at least prudent/robust) to have that IDB queue in the middle...
Edit 2: Or maybe it actually is all quite similar - LiveQuery uses BroadcastChannel for part of its magic. Hopefully my ramblings here provide some spark!
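The "consolidate queued requests into one batch" part of the idea above can be sketched with a small pure helper. This is a hypothetical sketch, not anything from Dexie or a real library: the record shape and the name `buildBatch` are my own, and in the real setup the records would come from a Dexie/IDB request-queue table, with the leader worker POSTing the resulting payload.

```javascript
// Merge queued per-tab requests into a single batch payload, de-duplicating
// identical requests so the server does each unique piece of work once.
// (Hypothetical sketch; the record shape {method, url, tabId} is assumed.)
function buildBatch(queued) {
  const seen = new Map();
  for (const req of queued) {
    const key = `${req.method} ${req.url}`;
    if (!seen.has(key)) {
      // Track which tabs asked for this request, so the leader can fan the
      // response back out (e.g. via a queue-responses store + liveQuery).
      seen.set(key, { method: req.method, url: req.url, tabs: [req.tabId] });
    } else {
      seen.get(key).tabs.push(req.tabId);
    }
  }
  return { requests: [...seen.values()] };
}
```

The `tabs` array is what lets responses be routed back: each unique request is made once, but every interested tab learns the result.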
@nickchomey Why not just use cache storage to store responses and check if a response has already been cached, it's what I used for bundlejs.com
Thanks for the suggestion!
I already am planning to use cache storage or, more likely, IndexedDB to store responses. This particular mechanism is for a periodic call to the server to check for any updates (it's a highly dynamic site with social networking, etc.), and each tab will be looking for different updates. So, rather than have each tab sending a request every minute (on its own schedule, depending on which second of the minute each was opened), I'd like to consolidate all requests into a single request that is sent to the server at, say, 0 seconds of each actual minute of the clock.
Benefits of this are:
Perhaps someday I'll implement some WebSocket mechanism that'll obviate the need for all of this, but it won't happen for now. And, moreover, perhaps someday I'll implement a sort of Headless WP that purely uses REST API or GraphQL, but again not for now.
So, this seems like a reasonable approach. In fact, it seems pretty much equivalent to the JSON Batching mechanism that Microsoft has. I also use a similar mechanism with Elasticsearch's mget (batch query) API: one request from the webserver performs many search queries across a variety of indices, which PHP then parses and combines as needed before being sent to the browser.
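The minute-boundary alignment described above can be sketched with a tiny scheduling helper. This is a hypothetical sketch; `msUntilNextMinute` and the batch sender are my own names, and the real leader worker would do the actual POST.

```javascript
// How long until the next top-of-minute on the wall clock.
// At an exact boundary this returns a full 60000, i.e. "the next minute".
function msUntilNextMinute(now = Date.now()) {
  return 60000 - (now % 60000);
}

// Usage sketch (in the leader tab/worker):
// setTimeout(function tick() {
//   sendBatchRequest();                    // POST the consolidated batch
//   setTimeout(tick, msUntilNextMinute()); // re-align to the clock
// }, msUntilNextMinute());
```

Aligning to the clock rather than to each tab's open time is what lets every tab's periodic check collapse into one request per minute per origin.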
Sorry for surely cluttering the discussion here, but I hope that it provides a spark of inspiration in some way!
I would be hesitant to use IndexedDB; it's less reliable than Cache Storage. Plus, Cache Storage is meant for storing large pieces of data, so you may want to look into it.
Thanks. I will experiment with both, but I strongly suspect IDB is the way to go for my application. It supports more granular querying, and Dexie makes it easy, flexible and robust to use. IDB also has larger storage limits, and I believe longer persistence is possible as well.
I'll let you folks know if I can figure out this SharedServiceWorker-via-Dexie thing. It might be a decent solution for Chrome for Android and other incompatible browsers.
Looks like the RxDB package already implements pretty much exactly what I had tried to describe - they call it Leader Election, which itself is an already-defined software pattern. They talk about it in their discussion of Multi-tab support as a potential solution to the lack of SharedWorker support in browsers.
The free version of RxDB works on top of Dexie, so it looks like I'll be using that.
Hope this helps!
Awesome, I'll look into that
I just noticed this issue, I had started developing a ponyfill for this purpose (inspired by & links to this repo) using leader election and broadcast channels for browsers that don't support shared workers.
I did get it working, but it could've been better; I'm re-writing it now, and it should end up as more of a polyfill than a ponyfill. From a worker script's perspective, it'll be no different than a real shared worker, with some caveats of course: if the leader tab gets closed, the worker will be re-started in a different tab; and my implementation seemingly doesn't care whether the worker script is classic or module (either will work), so setting the type in worker options isn't really needed.
It is indeed much more complex, and I ran into a fair few edge cases when first implementing it; I haven't run into all of those in the new re-implementation yet, though. One being that in one browser (Chrome, IIRC?), if the window was minimized or not completely visible, it would lower worker priority, making timers execute much slower, and initially leader election was based on a ping-timeout algorithm to know whether a worker had closed or stopped responding. With the re-implementation, I haven't seen that issue just yet, though I'm considering alternatives if I can get them working reliably.
My personal use case was wanting to have a single shared websocket connection across all tabs; serviceworkers won't let you do this, sharedworkers are great for it but aren't supported in some cases and simply polyfilling with a dedicatedworker defeats the goal of having a single websocket connection (thus defeats the main purpose of even having websockets in a worker)
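The ping-timeout part of the leader election described above can be sketched as a pure check plus a follower loop. This is a hypothetical sketch with my own names (`isLeaderAlive`, `startElection`); the timestamps would come from ping messages on a BroadcastChannel, and, per the throttling caveat above, backgrounded tabs may run timers slowly, so generous timeouts help.

```javascript
// Has the leader pinged recently enough to be considered alive?
// (Pure helper so the policy is separate from the messaging plumbing.)
function isLeaderAlive(lastPingAt, now, timeoutMs = 5000) {
  return now - lastPingAt <= timeoutMs;
}

// Follower loop sketch (browser-only APIs, shown as comments):
// const bus = new BroadcastChannel("leader-pings");
// let lastPingAt = Date.now();
// bus.onmessage = (e) => { if (e.data === "ping") lastPingAt = Date.now(); };
// setInterval(() => {
//   if (!isLeaderAlive(lastPingAt, Date.now())) startElection();
// }, 1000);
```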
Not sure if there's any interest here in my implementation but I thought I would at least mention it for discussion purposes.
Oh, wow, yeah I'm interested. I was working on that very thing a couple months ago but I ran out of time and haven't been able to go back and try to implement this 😅
Yeah, definitely. I've been working on it for the last few days and, as I said, it can be a tiny bit complex. I did post my previous version to GitHub, but I'm not sure if it was complete or fully functional; I know I was at least close to considering it finished enough to be usable, but I don't think I got all the way there: https://github.com/Shaped/SharedDedicatedWorkerPonyfill. As you can see in the README.md, it was indeed inspired by your project here, and I even linked back to it.
My previous version had a few issues, though, and wasn't really a polyfill; more of a ponyfill, I guess. Actually, I had never heard of the term ponyfill until I saw it in your project with the conveniently included definition. And a few things were a bit odd about it.
One example being that it required the worker script to be implemented as a class that extended a sort of wrapper class. The worker parent class would just define some private values as well as having accessors for `postMessage` and `port`, with `postMessage` linked to a function in the worker shim that would handle posting messages to the appropriate place: the worker port if it's actually a shared worker, the worker port if it's a dedicated worker and the message is staying within the same tab context, or a broadcast channel if it's a dedicated worker and the message needs to go to a different tab context.
Another example was that scripts were loaded weirdly; I can't recall the exact details, but I think I had an issue in some scenarios using `importScripts` to load the end-user/developer's worker code itself, either dependent on the browser, on whether the worker was a module or classic, and/or maybe on whether the polyfill itself was loaded as a module or not; again, my memory on the exact reasoning is lacking, but it had two loading methods.

For workers as modules, it would load them using a dynamic import (i.e. an `await import()` call) and then serialize their code with `module.toString()`, before posting a message to the worker context with the serialized code, as well as the serialized code for the parent class, as a string to the worker shim/wrapper itself, which would then unwrap and evaluate it using `` const workerClass = Function(`${workerParentSerialized}\nreturn ${workerSerialized}`)() `` (with `Function()` basically equivalent to `eval()` but slightly more aesthetic?) and then instantiate that class in the actual worker context.
For workers that weren't modules, the worker shim would use `importScripts` to load the parent class the worker was expected to extend into scope. I'm not sure how well that worked; reviewing the code now (not sure if it's the same as on the repo), it's kinda messed up, but it basically does the `importScripts()` on the parent class and then uses `Function()` to load the serialized worker code like above.
And - just writing that out was painful. It's terrible but it seemed to be what I had to do at the time to get things working regarding loading/importing scripts. Plus, requiring the end-user/developer to have their worker script be implemented as a class extending another class yet not providing any additional functionality above the polywantsaponyfilling/sharedworkeremulation was a bit silly as well. To be fair, I write mostly everything in classes and didn't initially expect to share the code so it wasn't really an issue for myself, but for general developers, yeah it's silly.
So I decided to start re-writing it from scratch. I haven't yet run into any issues with loading/importing scripts like before that resulted in the weird `Function()`/`importScripts()`/serialization workahackarounds; hopefully I don't, though I haven't tested outside of desktop Chrome and Firefox yet, and I can't say for certain whether my current approach would've worked on the older browser versions that were out when I first started it; perhaps it would have, and all of that was unnecessary, with me traversing the definite wrong path.
The plan now is to not require the end-user/developer to do anything special; any existing shared worker script should work fine. You can, optionally, have your worker export a default class which the shim will instantiate passing the port to the constructor, but this is an optional extra.
That will also allow it to be implemented as an actual polyfill, defining `window.SharedWorker` if it doesn't exist, though I plan to allow it to be used in any case as `new SharedDedicatedWorker`.
There are some options and configuration. You can set:

- the interval between worker pings;
- the length of time after which a worker that doesn't respond or ping is considered timed out, resulting in a new leader election;
- the length of time to wait for a worker to respond when initially loading;
- whether to force usage of `DedicatedWorker`s even if `SharedWorker` is available;
- whether to force usage of the `BroadcastChannel` even when a standard port is available;
- a namespace (similar to the `name` property in the options object when creating a `new SharedWorker(..., {options})`); setting a different namespace allows loading multiple isolated workers from the same script, and the namespace is also used for the `BroadcastChannel`;
- etc.
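To make the configuration above concrete, here is a hypothetical sketch of what such an options object and a defaults merge might look like. These option names (`pingInterval`, `pingTimeout`, `loadTimeout`, `forceDedicated`, `forceBroadcast`, `namespace`) are my guesses at a shape, not the project's actual API.

```javascript
// Assumed default configuration for the SharedDedicatedWorker sketch.
const DEFAULTS = {
  pingInterval: 1000,    // ms between worker pings
  pingTimeout: 5000,     // ms of silence before a new leader election
  loadTimeout: 10000,    // ms to wait for a worker to respond on initial load
  forceDedicated: false, // use dedicated workers even if SharedWorker exists
  forceBroadcast: false, // use BroadcastChannel even when a port is available
  namespace: "default",  // like SharedWorker's name option; isolates workers
};

// Merge user-supplied options over the defaults.
function resolveOptions(userOptions = {}) {
  return { ...DEFAULTS, ...userOptions };
}
```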
Then, basically, it loads and determines whether to use `new SharedWorker` or `new Worker`, depending on whether `SharedWorker` is available or whether you've set `forceDedicated` to true.

If it's shared, it will do a `new SharedWorker()` in each tab it's loaded in, which will open a new port for each tab. If it's not shared, then it will check if any workers are already running, start one if not, and otherwise enable communication to the existing one via a `BroadcastChannel`.
The worker shim handles proxying things like `self.onconnect`, `port.onmessage`, `port.postMessage` and `port.addEventListener()` to route messages over the appropriate worker `MessagePort` or `BroadcastChannel` as needed.
As real `MessagePort`s in workers and shared workers have a `transferList` argument that allows a zero-copy transfer of certain types across contexts, if `forceBroadcast` isn't enabled, you'll still be able to use transfers in certain cases: of course when using shared workers, but also when using a dedicated worker in the same tab. If a port can't be used, then a transfer is impossible; in that case, it will optionally either throw an error stating this or, if configured, do a copy or `structuredClone()` instead of a transfer. Otherwise, if `forceBroadcast` is enabled, `BroadcastChannel`s will be used for all messaging between workers regardless of their type, context, or context relationship with the message sender.
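The transfer-vs-copy decision described above boils down to a small routing policy. The following is a hypothetical sketch; `deliveryMode` and `onNoTransfer` are my own names standing in for the configurable throw-or-clone behavior.

```javascript
// Decide how a message should be delivered, given the current worker setup.
// Returns one of: "broadcast", "transfer", "clone", "error".
function deliveryMode({ forceBroadcast, portAvailable, onNoTransfer = "throw" }) {
  if (forceBroadcast) return "broadcast";       // always BroadcastChannel; no transfers
  if (portAvailable) return "transfer";         // MessagePort: zero-copy transferList works
  if (onNoTransfer === "clone") return "clone"; // fall back to a structuredClone copy
  return "error";                               // surface that a transfer is impossible
}
```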
From the perspective of the script loading the worker, the result of `new SharedDedicatedWorker()` (or `new SharedWorker()` if the polyfill is used) should look and act identical to the native version, with `worker.postMessage()` and such also proxied as above in the worker shim.
So far, the biggest difference I've run into in my new implementation is that for the native `new SharedWorker(workerURL, options)` and `new Worker(workerURL, options)`, the `options` object has a `type` property that can be set to `classic` or `module` to specify the type of the worker script. The actual call to create a new worker uses the shim script, which is a module, and when the shim loads the actual worker script, the way it's loaded, it doesn't seem to matter whether it's a module or not, so this one option probably won't be override-able. The other options are `credentials`, which can be overridden if needed and is used when loading the shim, and `name`, which also won't be override-able, as the name is generated from the worker script URL and, if specified, a namespace. So you can have the same effect as setting `options.name` by setting the namespace when using `new SharedDedicatedWorker(workerURL, options, workerOptions)`, although when it's polyfilled to match the standard implementation it would be `new SharedWorker(workerURL, workerOptions, options)`, with the third argument being optional and probably using the `workerOptions.name` as the namespace. I think when using it as a polyfill, I'll have a separate class that defines `globalThis.SharedWorker` wrapping `SharedDedicatedWorker`, and then you can include either the polyfill script or the SharedDedicatedWorker script as you deem fit.
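The polyfill wiring described above could be as small as a guarded assignment. This is a hypothetical sketch assuming a `SharedDedicatedWorker` class exists; `installSharedWorkerPolyfill` and the injectable `global` parameter are my own names (the injection just makes the guard testable outside a browser).

```javascript
// Define SharedWorker on the given global only if the native one is missing.
function installSharedWorkerPolyfill(global, SharedDedicatedWorker) {
  if (!("SharedWorker" in global)) {
    global.SharedWorker = SharedDedicatedWorker;
  }
  return global.SharedWorker;
}

// In a browser: installSharedWorkerPolyfill(globalThis, SharedDedicatedWorker);
```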
As of now, it's mostly working as expected with both shared and dedicated worker types, with leader election/timeouts/loading new workers (when using dedicated) if the leader tab closes. I still need to finish all the wrapping/proxying to ensure messages go to the appropriate port/broadcast channel as needed, do my best to ensure that the worker doesn't see anything out of the ordinary (e.g. avoiding polluting `self`) when running, set it up to define `SharedWorker` when used as a polyfill, and otherwise do some testing on other browsers like Safari Desktop/iOS and Chrome/Firefox on Android. I think also, IIRC, Firefox Private windows disallow shared workers..?
I would be interested in any insights or feedback you might have being someone aware and knowledgeable about this. I'm hoping to get my new version in a state that is suitable for posting shortly, I'm happy to share it here once I do if you're interested and it will still be under MIT if you wanted to use it or steal from it ;-)
..and wow that message was way longer than I intended..lol
> Another example was that scripts were loaded weirdly; I can't recall the exact details, but I think I had an issue in some scenarios using `importScripts` to load the end-user/developer's worker code itself, either dependent on the browser, on whether the worker was a module or classic, and/or maybe on whether the polyfill itself was loaded as a module or not; again, my memory on the exact reasoning is lacking, but it had two loading methods.
So I was just poking around MDN; it seems `import` in workers only became a thing in Firefox version 114, released in June.
That explains why I had issues before and not now. Now the question is whether I want to implement my previous workaround for older Firefoxen... although FF isn't really a target needing a fill anyway. Though I thought for some reason SharedWorkers weren't available in private tabs, but maybe that was just service workers..?
hmmmm!
Since Chrome on Android still does not seem to support SharedWorkers, is there an established solution? It seems that they even removed the platform compat tests that require SharedWorkers. So I don't expect Chrome on Android to support this any time soon.
I was working on this, but I don't really have time to complete it
Just experienced the same issue and went down a very similar rabbit hole, but ended up resolving it in a few lines of code just using the Web Locks API inside the worker. It basically acts as a failover and has great compatibility across all common browsers, including Chrome on Android.
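The Web Locks pattern amounts to requesting a named lock and holding it forever: whichever tab acquires the lock is the leader, and when that tab closes, the browser releases the lock and the next waiter takes over. This is a sketch of that standard pattern; the lock name and the injectable `locks` parameter are my own (the injection just makes it testable; in a browser you would pass `navigator.locks`).

```javascript
// Elect a leader via a named web lock. onBecomeLeader runs only in the tab
// that acquires the lock; the never-resolving promise keeps the lock held
// for that tab's lifetime, so followers queue behind it automatically.
function electLeader(locks, onBecomeLeader, lockName = "tab-leader") {
  return locks.request(lockName, () => {
    onBecomeLeader();
    return new Promise(() => {}); // hold the lock until this tab closes
  });
}

// In a browser: electLeader(navigator.locks, startSharedWork);
```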
As it turns out, RxDB (which I mentioned above) uses Web Locks for leader election, with BroadcastChannel as a fallback.
**Is your feature request related to a problem? Please describe.**
From the source, it seems that this library wraps a normal Worker in case there is no SharedWorker. However, a SharedWorker is only instantiated once per origin/URL. Does this library handle this as well?

**Describe the solution you'd like**
One tab could host the actual worker, and the other tabs could be tricked into delegating everything to a `BroadcastChannel`, which the Worker also listens on.

**Describe alternatives you've considered**
None
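The delegation idea above can be sketched as a thin Worker-like wrapper that forwards messages over a channel. This is a hypothetical sketch; the class name is my own, and the channel is injected (any object with `postMessage`/`onmessage`), so in a browser you would pass `new BroadcastChannel("shared-worker-bus")` and the tab hosting the real Worker would listen on the same channel.

```javascript
// A Worker-shaped facade: tabs without the real worker talk to this,
// and everything is relayed over the injected channel.
class DelegatingWorkerPort {
  constructor(channel) {
    this.channel = channel;
    this.onmessage = null;
    // Relay inbound channel messages to the consumer, mirroring worker.onmessage.
    channel.onmessage = (event) => {
      if (this.onmessage) this.onmessage(event);
    };
  }
  // Same shape as worker.postMessage; the message reaches the one tab
  // actually hosting the Worker, which listens on the same channel.
  postMessage(data) {
    this.channel.postMessage(data);
  }
}
```

Note that BroadcastChannel messages don't support a `transferList`, so this fallback copies data rather than transferring it.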