Closed hugotiburtino closed 1 year ago
I was able to inject a script via my local copy of the extension. If you want to see how I did it, I can send you more details, or we can schedule a short meeting.
If you're wondering why I'm so interested in this extension: I've created a similar one, CSS Beautifier, and I have the same kind of concerns you may have.
I think the main thing we’re trying to do with this `if` is detect whether we have already rendered the markdown. Basically the script runs on a text file and might be triggered more than once, in which case we shouldn’t run the renderer again. I’m not sure who or what could inject JavaScript, since the initial file is text. If an add-on does it, that means the user explicitly allowed it to do so (and we’re intentionally breaking compatibility).
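The idempotence check described above could be sketched roughly as follows. This is illustrative only, not the extension's actual code: the marker id `markdown-viewer-rendered` is a hypothetical name, and `doc` stands for the page's `document`.

```javascript
// Hedged sketch of a "did we already render?" guard. A first run leaves a
// marker element behind; later runs see it and skip re-rendering.
function shouldRender(doc) {
  // No marker yet means no previous run rendered this page.
  return doc.getElementById('markdown-viewer-rendered') === null;
}

function markRendered(doc) {
  const marker = doc.createElement('meta');
  marker.id = 'markdown-viewer-rendered';
  doc.head.appendChild(marker);
}
```

The renderer would then call `shouldRender` at the top of its entry point and `markRendered` once the page has been rewritten.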
The initial file is supposed to be plain text, but there is a way to deceive the current version of the extension. I've invited you, @Cimbali and @KeithLRobertson, to a private repo where you can see how the attack could be carried out.
I see: we perform markdown rendering on HTML pages that masquerade as text files displayed by Firefox (i.e. HTML pages with a single `<pre>` tag).
In both cases (script in the header or within the `<pre>`), it’s worth noting that Firefox runs the JavaScript anyway, with or without the markdown rendering, as you’re really visiting an HTML page that has some JavaScript. So we’re not doing anything the user isn’t already doing by visiting that page.
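A stricter detection of "real" Firefox-rendered text files, of the kind being discussed, could look something like this sketch. It is an assumption about a possible check, not the extension's actual code; `doc` stands for the page's `document`.

```javascript
// Only treat the page as a Firefox-rendered plain-text file if the body
// consists of exactly one <pre> element and the document contains no
// <script> elements anywhere (neither in the head nor inside the <pre>).
function looksLikePlainTextPage(doc) {
  const children = doc.body.children;
  return children.length === 1 &&
         children[0].tagName === 'PRE' &&
         doc.getElementsByTagName('script').length === 0;
}
```

A page that smuggles a script into the head or the `<pre>` would fail this check and simply not be rendered.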
Your suggestion is that we shouldn’t render pages that have javascript running on them because that risks fingerprinting, right? I think we could follow (some of) the recommendations from the MDN page to which you link, but I would rather see fixes that follow these guidelines:
Indeed, following some of the official recommendations would solve the problem altogether. Alternatively, one could remove some features (like the highlight themes, which aren't working right now anyway).
I'm not sure the extension should refuse to render pages that aren't real plain text. For now I've decided that my extension, the CSS Beautifier, won't render them, but maybe the user wants the page beautified even knowing it is not a pure md file.
My security patch #73 is just a stopgap. It is quicker to apply than implementing one of the MDN recommendations, and it does not require removing an existing feature. But you are right that a more robust solution is needed.
Here’s how we stand on the security guidelines currently:
[x] Don’t inject or incorporate remote scripts
[x] Ensure you insert remote content safely. No remote content, though we could improve the way we insert our own content.
[x] Use XHR for Google Analytics. No analytics.
[x] Use the standard extension content security policy (CSP)
[x] Share objects with in-page JavaScript with care. No JS objects are shared.
[x] Use window.eval() in content scripts with caution. No eval().
[ ] Create your UI with extension components
Create the UI for your extension using the built-in extension UI features, such as bundled pages, pageAction, and popups on pageAction and browserAction. Don’t add UI elements, such as buttons or toolbars, directly to web pages. If you do, scripts on the web page could compromise your extension. See Keybase Browser Extension Insecure for an example of the potential issues. If the standard UI components aren’t sufficient for your needs use iframes with data URLs to prevent fingerprinting, or add iframes to the extension code so a page can’t interact with your UI content, such as buttons.
Possibilities to adhere to this guideline:
I’m not sure we want to do this, as those are rather clunky styling elements, and the menu is rather discreet. Though those could be options for people who don’t like the menu. Furthermore, we need to modify the page anyway to do the markdown rendering, so I don’t think adding the menu makes much difference if we take care not to use fingerprintable elements.
I’m not sure what there is to gain security-wise here; there is no sensitive user input as in the Keybase example they cite.
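For reference, the iframe-with-data-URL approach the guideline mentions could look something like this sketch. It is illustrative only, not the extension's code; `doc` stands for the page's `document` and `menuHtml` for hypothetical menu markup.

```javascript
// Put the menu UI inside an iframe built from a data: URL. The page cannot
// script into a cross-origin frame, and no moz-extension:// URL (with its
// per-installation UUID) is ever exposed to the page.
function insertMenuFrame(doc, menuHtml) {
  const frame = doc.createElement('iframe');
  frame.src = 'data:text/html;charset=utf-8,' + encodeURIComponent(menuHtml);
  doc.body.appendChild(frame);
  return frame;
}
```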
[ ] Add eslint-plugin-no-unsanitized to ESLint
If you make use of ESLint to check your extension code, consider adding eslint-plugin-no-unsanitized. This ESLint rules plug-in will flag instances where unsanitized code from APIs or user input could cause issues.
We don’t use ESLint at the moment.
[x] Don’t inject moz-extension paths directly
When injected links, includes, or images include paths to moz-extension://{hash} a page’s tracking script could use this information to fingerprint the user, as the hash (UUID) is unique to the extension installation and, therefore, the user. The best way to avoid this issue is to follow the general advice about not injecting content. However, if you believe injecting content is your only practical approach, ensure that moz-extension paths are embedded inside an iframe using a data URL or the srcdoc attribute.
~I think this is the main thing to fix for us right now. Basically only happens with the paths to the CSS files.~ This is now fixed by inserting the content of the CSS files directly, see 42ce896.
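The fix amounts to something like the following sketch (an illustration of the idea, not the actual commit): instead of injecting a `<link>` pointing at a `moz-extension://` URL, the extension inlines its own CSS in a `<style>` element.

```javascript
// Inline the extension's CSS instead of linking to it. `cssText` would come
// from the extension's bundled stylesheet in the real add-on; here it is
// just a plain string. No moz-extension:// URL ever reaches the page, so
// the per-installation UUID cannot be used for fingerprinting.
function inlineCss(doc, cssText) {
  const style = doc.createElement('style');
  style.textContent = cssText;
  doc.head.appendChild(style);
  return style;
}
```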
[x] Ensure that third-party libraries are up to date
[x] Do not modify third-party libraries. This might be a little contradictory if some libraries do not adhere to all of the above guidelines.
Closing this as v2 (#99) will have proper security, including rendering in an extension page instead of on an already opened page, and placing the rendered result in a sandboxed iframe.
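The sandboxed-iframe part of that design could be sketched roughly like this. It is an assumption about the approach, not the actual #99 code; `doc` is the extension page's `document` and `renderedHtml` the renderer's output.

```javascript
// Place the rendered HTML in an iframe with an empty sandbox attribute:
// any <script> smuggled through the markdown can neither execute nor
// access the extension page's origin.
function showRendered(doc, renderedHtml) {
  const frame = doc.createElement('iframe');
  frame.setAttribute('sandbox', '');  // empty sandbox: no scripts, no same-origin
  frame.srcdoc = renderedHtml;
  doc.body.appendChild(frame);
  return frame;
}
```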
Hi, I've noticed a security issue. The conditions for processing the data are not enough to prevent a malicious site from inserting a script in the head or inside the `<pre>`. I've already fixed it, and you will receive the pull request soon.