wanderview opened 3 years ago
(taking a look at this with @mfalken)
This seems fair to us, since you can `eval(str)` already. What's the use-case? I guess it's taking stuff out of the cache API?
Motivation is that perhaps we could also then treat extension URLs as local since they don't hit network. That would provide a path for extension service workers to do late import/importScripts, etc. It seemed like perhaps a principled way to solve the problem.
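For concreteness, a minimal sketch of what this would enable in an extension service worker, assuming extension URLs were treated as "local" (the file name and message shape are hypothetical):

```js
// background.js (extension service worker)
self.addEventListener('message', (event) => {
  if (event.data === 'enable-analytics') {
    // A late import of a script packaged with the extension. Today this is
    // rejected because it happens after installation; under this proposal it
    // would be allowed because the URL never hits the network.
    importScripts(chrome.runtime.getURL('analytics.js'));
  }
});
```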
@annevk do you have an opinion on this? Would it be fair to treat an extension URL as "local" if it's guaranteed to not hit the network? For example, I note that `file:` does not hit the network, but it's not considered "local". (Although maybe that's because `file:` can in theory have a hostname?)
I'm not sure we considered `file:` fully, but it can definitely hit the network depending on your environment, as I understand it. Will this lead to a pattern of folks using `fetch()` + this, or maybe they are already doing that with `eval()`?
For `blob:` and `data:` URLs this seems somewhat reasonable I suppose, but it does make the model a bit more complex. If that's only for extensions, I'm not sure that's enough justification to also change it for the web.
One non-extension use case is loading script from IDB as a blob. Today you would have to read it all into memory and use `eval()`, but importing from a blob URL would be more natural. I know of one major site storing script in IDB, although I don't think they need to import it in their service worker today.
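A rough sketch of that use case (the IDB helper is hypothetical, and it assumes blob URL creation and late `import()` were actually allowed in the importing context):

```js
// Load a script stored in IndexedDB and execute it.
async function runStoredScript() {
  const source = await readScriptFromIDB('plugin.js'); // hypothetical helper returning a string

  // Today: the whole script has to be pulled into memory and eval()'d.
  // eval(source);

  // With this proposal: import it via a local blob: URL instead.
  const blob = new Blob([source], { type: 'text/javascript' });
  const url = URL.createObjectURL(blob);
  try {
    await import(url);
  } finally {
    URL.revokeObjectURL(url);
  }
}
```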
I think the main alternative is #1585. That would support some form of `import()` for the web, but require similar early initialization for extensions. It also continues the unfortunate behavior of "offlining" script resources that are in fact stored locally in the extension bundle (or blob, etc.).
I know we could just throw this all behind a big "if extension" conditional and do whatever we want, but it would be nice if we could find a more principled alignment with the spec. This leads to fewer surprises for developers, less maintenance burden, etc.
It would be great to get the WECG to document/spec how WebExtensions upgrades relate to ServiceWorker upgrades. Is there an invariant that if a WebExtension ServiceWorker upgrades from v1 to v2 there is a mechanism or strict ordering for the upgrade, so that the v1 SW would always see the resources from the v1 WebExtension and the v2 SW would always see the resources from the v2 WebExtension? Or would implementing the local-only optimization create a problem where the SWs can effectively be out of sync with the "local" WebExtension resources?
If that's addressed, this seems like a reasonable proposal.
> how WebExtensions upgrades relate to ServiceWorker upgrades
I can describe what happens currently in Chrome/Firefox.
All files in an installed extension (from the addon gallery) are frozen to their original installed content and will never change, so AFAICT there is no need for the upgrade event in the service worker. That said, browser makers may want to offer a simplified hot reload for the service worker of a locally developed unpacked extension, but it's not implemented yet.
When the entire extension is updated (or the reload button is pressed on a locally developed unpacked extension), the browser simply uninstalls the old extension (retaining its data), and installs the new one.
> One non-extension use case is loading script from IDB as a blob. Today you would have to read it all into memory and use `eval()`, but importing from a blob URL would be more natural.
I'm not sure how this would work? We explicitly disallowed service workers from creating blob URLs, because their lifetime would be even more confusing than blob URLs created elsewhere. For this particular use case it seems like the lifetime would be fine, but we'd have to re-enable blob URL creation in service workers for this to be possible.
@asutherland When reloading an extension, we first Deactivate the extension, which stops the old service worker. When Deactivate is finished, we Activate the extension, which installs the new service worker. So, v1 SW would always see the resources from the v1 Extension and the v2 SW would always see the resources from the v2 Extension.
Then it seems like letting local URLs be imported whenever is fine. I do wonder if it might make sense to be conservative and also require that the local URL be same-origin. This would mainly be to help side-step issues like extensions depending on the script resources of other extensions. (Maybe this is forbidden in practice by the underlying extension schemes, but there's no spec yet to clarify that.) This would forbid data URLs for late import as a side effect, but I'm not sure that's a bad thing, as it seems desirable to instead let import take Blobs or Responses or anything else, rather than enabling createObjectURL or incentivizing data URLs (which can be a strain on devtools / introspection tools that don't optimize for multi-megabyte URLs).
I'm not sure I understand why we would need to require late imports to be same-origin. No other worker has that restriction. What is it about this situation that would require a same-origin restriction?
@ghazale-hosseinabadi is it possible for one extension to load URLs targeting resources in a different extension? Or are they isolated from one another?
Extensions are isolated from one another and it is not possible for one extension to load URLs targeting resources in a different extension.
@ghazale-hosseinabadi Is it possible for content to load URLs from extensions?
While talking with @mfalken about this issue we started wondering what service workers do for data URL imports today. It seems we have some inconsistencies:
https://sw-import-data-url.glitch.me/
Chrome seems to run `importScripts()` for data URLs, but Firefox returns an error. Also, it seems Chrome does not actually offline the data URL, since it gives an error on the next SW script evaluation complaining that it's a late import.
Edit: But Firefox does seem to support `importScripts()` of data URLs in dedicated workers.
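The test boils down to something like this (a sketch of the demo's shape, not its exact code):

```js
// sw.js
// Top-level evaluation: Chrome appears to run this, Firefox errors.
importScripts('data:text/javascript,self.fromDataUrl = true;');

// A *late* import of a data: URL, e.g. inside an event handler, is what
// currently fails as a forbidden late import even where the top-level one worked.
self.addEventListener('message', () => {
  importScripts('data:text/javascript,self.lateImport = true;');
});
```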
Thank you both for looking into that! As you are probably aware, Firefox stores ServiceWorker scripts in its Cache API implementation, and this likely makes data URIs subject to the scheme enforcement of step 4 of Cache.put.
@asutherland yes, please refer to https://developer.chrome.com/docs/extensions/mv3/content_scripts/:
> They can also access the URL of an extension's file with chrome.runtime.getURL() and use the result the same as other URLs.
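For example, a content script can do something like this (a sketch; the resource path is hypothetical and would need to be listed in `web_accessible_resources`, as noted below):

```js
// In a content script: resolve a packaged extension resource to a full
// chrome-extension://<id>/... URL and use it like any other URL.
const logoUrl = chrome.runtime.getURL('images/logo.png');
document.querySelector('img#brand').src = logoUrl;
```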
@ghazale-hosseinabadi, @asutherland, only files exposed explicitly via `web_accessible_resources` are accessible to content scripts and web pages. By default, non-exposed URLs are not accessible from outside of the extension's own origin, which is `chrome-extension://<id>/` (chrome.runtime.getURL() prepends it if that wasn't present in the parameter).
In theory local scripts would not need to be offlined in any storage, so Firefox's use of the Cache API would not be a problem if we made an early exemption in the algorithm like "if local, do a normal importScripts()".
> In theory local scripts would not need to be offlined in any storage, so Firefox's use of the Cache API would not be a problem if we made an early exemption in the algorithm like "if local, do a normal importScripts()".
Agreed. If we're going to make an exception for 'local' always-available-offline-anyway URLs, then we'll bypass the map for them.
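In rough JS-flavored pseudocode (the helper names are made up, not proposed spec text), the exemption would look something like:

```js
// Sketch of the service worker import path, before consulting the
// registration's script resource map.
function fetchImportedScript(registration, url) {
  if (isLocalScheme(url)) {
    // data:, blob:, about: per Fetch's "local scheme". These are always
    // available offline, so there is nothing to store in the map; just do a
    // normal importScripts()-style fetch and run.
    return fetchAndRunNormally(url);
  }
  // Otherwise keep the existing behavior: the script must already be in the
  // script resource map, or this is a forbidden late import.
  return lookupInScriptResourceMap(registration, url);
}
```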
@mkruisselbrink
> We explicitly disallowed service workers from creating blob URLs, because their lifetime would be even more confusing than blob URLs created elsewhere. For this particular use case it seems like the lifetime would be fine, but we'd have to re-enable blob URL creation in service workers for this to be possible.
Is the lifetime of blob URLs tied to the client lifetime, or is it more complicated than that?
I agree it's a footgun if folks start creating blob URLs in a service worker and expect to be able to use them in a page. I guess there's no easy way to create a blob URL that can only be used from the client that created it?
It's still possible to turn these things into data URLs, so it might not be a big deal.
I guess rather than encourage developers to use more blob URLs, we could somehow make `importScripts(responseObject)` work (but I guess the synchronous nature of `importScripts` creates a series of deadlocks here).
Just making `data:` work for now seems like the simplest choice.
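For illustration, turning script text into a `data:` URL is straightforward (a sketch; note `btoa()` only handles Latin-1 input, and it relies on late imports of local URLs being allowed per this issue):

```js
// Convert script source into a data: URL; no createObjectURL() needed.
const source = 'self.injected = true;';
const dataUrl = 'data:text/javascript;base64,' + btoa(source);
importScripts(dataUrl);
```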
Yeah, continuing to not allow `blob:` URLs sounds excellent, and if people need them we should offer alternatives whereby the `Blob` instance can be used directly. (Note that https://github.com/w3c/FileAPI/issues/153 will limit where `blob:` URLs can be used as well, including disallowing messaging one to a service worker.)
> if people need them we should offer alternatives whereby the `Blob` instance can be used directly.
Ahh yeah, that works around the deadlock issue with responses.
Oh, `Response` is probably better to be fair, and I'm not sure I see the deadlock issue. If there is one it would be there with `Blob` too I imagine.
> Oh, `Response` is probably better to be fair, and I'm not sure I see the deadlock issue.
I think the issue is if `Response` has a stream created in the same event loop. Since `importScripts` is synchronous, and streams are asynchronous, you'd get a deadlock.
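A sketch of the scenario being described (`importScripts(response)` is not a real API; it's only here to illustrate why this would hang):

```js
// Create a Response whose body is a stream that will only be written to by
// code later in this same task.
const { readable, writable } = new TransformStream();
const response = new Response(readable, {
  headers: { 'Content-Type': 'text/javascript' },
});

// A synchronous importScripts() would block here until the body is fully
// read, but the writes below never get to run because the event loop is
// blocked. Deadlock.
importScripts(response); // hypothetical

const writer = writable.getWriter();
writer.write(new TextEncoder().encode('self.loaded = true;'));
writer.close();
```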
I guess `importScripts()` is another API we don't really want to extend. 😊 Anyway, if there is a use case, we can find an alternative way of fulfilling it that does not bring along all the mistakes of the past.
Yeah, let's just stick with `data:` for now (and the same path can be used for Chrome extensions) and see how far that gets us.
I'm confused. Service workers being prevented from creating blob URLs seems orthogonal to whether `importScripts()` can load a blob URL. For example, a window could create the blob URL and then postMessage it to the service worker.
I'm not saying there is a strong use case there, but the creation of blob URLs and loading of blob URLs are separate things.
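A sketch of that shape, assuming the service worker side were allowed to load the URL:

```js
// In a window client: create the blob: URL and hand it to the service worker.
const blob = new Blob(['self.extra = true;'], { type: 'text/javascript' });
const url = URL.createObjectURL(blob);
navigator.serviceWorker.controller.postMessage({ type: 'import-me', url });

// In the service worker: a late import of a blob: URL it did not create.
self.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'import-me') {
    importScripts(event.data.url);
  }
});
```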
See #1595 (comment) and w3c/FileAPI#153.
Yea, I missed that. I have to say I don't understand why we would introduce some new kind of isolation instead of just making blob URLs only readable by contexts with the same StorageKey. Left a comment on that issue to that effect.
Also, one of my goals with this issue was to minimize how many special exceptions we have to our general rules. Saying "local URLs work like X" seemed like a simple rule. But saying "local URLs, except blob URLs, work like X" seems less principled to me and I don't really understand the motivation.
But even if we do w3c/FileAPI#153, why would we need something in the service worker importScripts algorithm to treat blob URLs specially? Couldn't the importScripts algorithm just operate on all local URLs equally and let the blob URL loading algorithm handle whether to fail or not?
Do we have consensus on this? To use the normal non-SW `importScripts()` algorithm for local URLs? If blob URLs become blocked based on agent cluster, then they would fail to load.
Yeah, ok, my objection to blob URLs isn't particularly strong. A page and a service worker would be in a different agent cluster, yeah?
> A page and a service worker would be in a different agent cluster, yeah?
Yes. A service worker is always in a separate agent cluster I believe. The only way that would change is if we allowed nested dedicated workers in a service worker, which we want, but have not done yet.
Ok, that sounds good to me. I'm happy with your proposal.
I'm fine with this. I think this does introduce somewhat of an edge case in terms of potential breakage via changes to web_accessible_resources (which are local but not same-origin as I understand it) mentioned in https://github.com/w3c/ServiceWorker/issues/1595#issuecomment-866947492. However, I think for Firefox our plan will be to unregister (or update) any ServiceWorker registration touched by a web extension in any way when the extension is upgraded or uninstalled, which side-steps the issue.
I don't think anyone from Safari/WebKit has weighed in?
@youennf any thoughts on this? The proposal is to allow `importScripts(url)` at any time if `url` is a local URL (`data:`, `blob:`, etc.).
Seems ok to me, though I am not a big fan of blob URLs in general. @bradeeoh might have some thoughts, since the use case seems related to web extensions.
My worry is allowing blob URLs to work temporarily and then later not being able to prevent them from working.
> My worry is allowing blob URLs to work temporarily and then later not being able to prevent them from working.
Don't you already have that problem with `SharedWorker`? A `SharedWorker` can be in a different agent cluster, and I believe blob URL works for `importScripts()` there already.
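For comparison, a sketch of the shared worker pattern being referred to:

```js
// In a window:
const blob = new Blob(['self.fromBlob = true;'], { type: 'text/javascript' });
const blobUrl = URL.createObjectURL(blob);
const worker = new SharedWorker('shared-worker.js');
worker.port.postMessage(blobUrl);

// shared-worker.js:
self.addEventListener('connect', (event) => {
  const port = event.ports[0];
  port.onmessage = (e) => importScripts(e.data); // blob: URL created by the page
});
```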
It seems to me the best approach for your concern would be to reach consensus in w3c/FileAPI#153 on whether to attempt agent cluster isolation or not. And if you want to attempt it, then you can get consensus there to not implement blob URL loading in service workers for now. But that seems orthogonal to whether to support late loading of local URLs in service workers.
But if you can at least get consensus to try, then implementations could have a check for blobs in the service worker code with a TODO comment saying that this should be converted to an agent cluster check in the blob loader in the future.
I think in the end, though, this kind of restriction should be part of the blob spec/impl and not buried in some exceptional if-statement in the middle of the service worker spec/impl.
Well, but `URL.createObjectURL()` is exposed in shared workers, so I think that is a different situation.
I do agree on what this should look like long term and if you want to block resolving this on https://github.com/w3c/FileAPI/issues/153 that would work for me as well.
I don't think we need to block on w3c/FileAPI#153. It looks like @arichiv is already investigating agent cluster isolation in chrome. So I think an implementation check and TODO comment pointing to that investigation would be reasonable to prevent volume increase for now.
@jakearchibald I am facing an issue with importing a local Firebase CDN file in the service worker (background.js) file. I can't seem to load other scripts or files locally from one file to another. Is there any way to achieve this?
I have tried importScripts, import, injectScripts, and using URLs, but none of these approaches seem to work.
File structure:

firebase-app.js
background.js

background.js:

```js
// Import necessary Firebase modules
import { initializeApp } from './firebase/firebase-app.js';
import { getMessaging, getToken } from './firebase/firebase-messaging.js';

// Firebase configuration object
const firebaseConfig = {
  apiKey: "YOUR_API_KEY",
  authDomain: "YOUR_PROJECT_ID.firebaseapp.com",
  projectId: "YOUR_PROJECT_ID",
  storageBucket: "YOUR_PROJECT_ID.appspot.com",
  messagingSenderId: "YOUR_MESSAGING_SENDER_ID",
  appId: "YOUR_APP_ID"
};

// Initialize Firebase
const app = initializeApp(firebaseConfig);

// Initialize Firebase Cloud Messaging and request permission to get the token
const messaging = getMessaging(app);

// Request for Notification Permission and Get Registration Token
const getRegistrationToken = async () => {
  try {
    // Ask the user for permission to display notifications
    const token = await getToken(messaging, { vapidKey: 'YOUR_PUBLIC_VAPID_KEY' }); // Provide your VAPID key
    if (token) {
      console.log('Registration Token:', token); // Token that can be used to send notifications
    } else {
      console.log('No registration token available. Request permission to generate one.');
    }
  } catch (error) {
    console.error('An error occurred while retrieving the token:', error);
  }
};

// Call the function to get the registration token
getRegistrationToken();
```
manifest.json:

```json
{
  "name": "Test Suite",
  "version": "1.0.1",
  "manifest_version": 3,
  "description": "This extension checks if the site being visited is harmful",
  "background": {
    "service_worker": "background.js",
    "type": "module"
  },
  "permissions": [
    "background",
    "gcm",
    "tabs",
    "notifications",
    "storage",
    "contextMenus",
    "declarativeNetRequest",
    "scripting",
    "activeTab"
  ],
  "icons": {
    "16": "icons/Icon(16X16).png",
    "48": "icons/Icon(48X48).png",
    "128": "icons/Icon(128X128).png"
  },
  "content_security_policy": {
    "extension_pages": "script-src 'self'; object-src 'self'"
  },
  "action": {
    "default_icon": {
      "16": "icons/Icon(16X16).png",
      "48": "icons/Icon(48X48).png",
      "128": "icons/Icon(128X128).png"
    },
    "default_popup": "popup.html"
  }
}
```
https://github.com/w3c/ServiceWorker/issues/1595#issuecomment-2345526088 seems to be about browser extension service workers specifically. @oliverdunk, anything to comment from your end?
Thanks Thomas!
> @jakearchibald I am facing an issue with importing a local Firebase CDN file in the service worker (background.js) file. I can't seem to load other scripts or files locally from one file to another. Is there any way to achieve this?
Without seeing your project running and the errors you are getting, it's hard to say anything with certainty. That said, I don't think the problems here are related to this issue.
As an example, there is a service worker specific version of the firebase messaging SDK (link). We don't have any Chrome guidance on making this work currently but there are some external blog posts you might find helpful, like this one.
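For anyone hitting this, a hedged sketch of the shape that suggestion takes; the local file paths are hypothetical (MV3 extension service workers can only load code packaged with the extension, so the service-worker build of the SDK would need to be bundled into the extension rather than pulled from a CDN):

```js
// background.js (with "type": "module" in the manifest)
import { initializeApp } from './firebase/firebase-app.js';
// Service-worker-specific build of the messaging SDK (firebase/messaging/sw).
import { getMessaging, onBackgroundMessage } from './firebase/firebase-messaging-sw.js';

const app = initializeApp({ /* your firebaseConfig */ });
const messaging = getMessaging(app);

// Handle push messages received while no page is open.
onBackgroundMessage(messaging, (payload) => {
  console.log('Received background message:', payload);
});
```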
If you have any other questions or follow-up, please use the chromium-extensions mailing list and that way we can avoid adding too much noise on the spec discussion :)
Hi @oliverdunk, thanks for your reply.
I am unable to load script files from one file to another. Additionally, the Firebase Messaging CDN JS file contains live URL APIs, and Chrome is blocking the background.js file with an "unregistered" error (status code 3).
My code setup includes HTML, CSS, JS, and manifest.json.
Could we allow late `import()` and `importScripts()` after installation if the script URL is considered "local"? https://fetch.spec.whatwg.org/#local-scheme

So for example, you could do a late `import()` or `importScripts()` to a `data:` or `blob:` URL.