WICG / webusb

Connecting hardware to the web.
https://wicg.github.io/webusb/

Why not sign the code? #191

Closed anga closed 3 years ago

anga commented 3 years ago

I really like the idea of WebUSB, but I'm really concerned about its security. Before giving my opinion, I want to clarify that I'm not a security expert, so my idea may sound stupid.

I read that to gain access to a device, only a permission prompt is displayed, but that allows malicious code to request access to a device. That is safe only if the user really knows what they are doing, but someone who is just browsing the web is going to say "yes, whatever". That security layer is like putting a door on an open field, IMHO.

So, why not sign the code using the same SSL public/private keys? Then, when you want to run some JS that requires USB access, it would look something like:

sandbox_js = WebUSB.requestAccess("https://samesite.com/usb_app.js.ssl")
sandbox_js.run()

NOTE: the code is just mock code

And then a prompt would show something like "Foo Bar LLC is requesting access to your USB device". Also, if the signed file does not match the site's SSL certificate, consider it an unsafe source. This way, only signed code could request access to a USB device.
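A rough sketch of the verification step being proposed, using the Web Crypto API (everything here — the `verifyScript` name, the ECDSA P-256 / SHA-256 parameters, the detached-signature shape — is invented for illustration; no such mechanism exists in WebUSB):

```javascript
// Hypothetical sketch of the proposal: before executing fetched script
// bytes, verify a detached signature against the site's public key.
// All names and algorithm choices here are illustrative, not anything
// specified by WebUSB or any browser.
async function verifyScript(publicKey, scriptBytes, signature) {
  // Resolves to true only if `signature` was produced over `scriptBytes`
  // by the private key matching `publicKey`.
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    publicKey,
    signature,
    scriptBytes,
  );
}
```

Only on a `true` result would the browser execute the script and let it request USB access; a mismatch would be treated as an unsafe source, as described above.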

Ok, now you can close this ticket :)

septs commented 3 years ago

I think you could submit this as a proposal for WICG review.

Start your standardization work!

anga commented 3 years ago

Thanks for your answer. I have two questions. The first one is the most obvious: doesn't the idea sound really stupid? The second: how can I do that? I have never made a proposal before.

septs commented 3 years ago

https://www.usb.org/sites/default/files/article_files/USB_Type-C_Authentication_PR_FINAL.pdf

Promoting something like this to WebUSB might be good.

reillyeon commented 3 years ago

The existing requirement that script accessing the API be delivered over HTTPS means that you get effectively the same protection that you propose. To take Chrome's implementation as an example, this is what the prompt looks like:

[Screenshot: Chrome's WebUSB device chooser prompt, 2020-09-22]

The domain shown cannot be spoofed by a malicious site. Sites can opt into stricter checks to prevent the injection of malicious code by deploying a stricter Content Security Policy, such as requiring all JavaScript resources to match a nonce or hash.

In addition the dialog is designed to discourage users from "yes, whatever" reactions to prompts. Notice that the "Cancel" option is selected by default and the "Connect" option is disabled. To actually grant the site permission to access a device the user must first click on the device and then click "Connect". The dialog is also dismissed automatically if they click anywhere else on the screen.
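For comparison, this is roughly what the existing flow looks like from page script (a sketch: the `0x2341` vendor-ID filter is just an illustrative value, and `requestDevice` must be called in a browser, in a secure context, from a user gesture):

```javascript
// Builds the options object for navigator.usb.requestDevice from a
// list of vendor IDs; filters narrow which devices the chooser lists.
function buildRequestOptions(vendorIds) {
  return { filters: vendorIds.map((vendorId) => ({ vendorId })) };
}

async function connectToDevice() {
  // Must run from a user gesture (e.g. a click handler); the browser
  // then shows the device chooser prompt and the user must explicitly
  // pick a device and click "Connect".
  const device = await navigator.usb.requestDevice(
    buildRequestOptions([0x2341]), // 0x2341: illustrative vendor ID
  );
  await device.open();
  return device;
}
```

Until the user selects a device in the chooser, the page learns nothing; dismissing the dialog rejects the promise.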

anga commented 3 years ago

I agree that putting the JS behind HTTPS provides the same protection if the site maintainer verifies all the code and has no bugs. Maybe just requiring the WebUSB code to be loaded from a file and not written inline in the DOM would be enough. But anyway, I don't agree that the security must be "trusted". The problem is where the limit is. For me, the limit is not the site's code or its devs in this particular case, where a site can potentially read my device while I think the malicious code comes from the site devs rather than from a malicious user.

What I'm trying to say with "yes, whatever" is that if a site usually requests access to a USB device, and the site contains malicious injected JS, the end user may think the request comes from the site and not from a malicious user. But if the code is required to be loaded from a file and not placed inline in the DOM, that issue may be solved.

Probably I'm missing something that is really clear to you.

reillyeon commented 3 years ago

As I mentioned Content Security Policy (CSP) is the solution to malicious script injection. WebUSB currently uses the "secure contexts" definition but I think there have been proposals to introduce stricter versions ("securer contexts"?) that require things like strict CSP.
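As a sketch, the kind of strict CSP being referred to might be deployed as an HTTP response header like this (following the widely documented nonce-based "strict CSP" pattern; `{RANDOM}` is a placeholder for a fresh per-response value):

```
Content-Security-Policy: script-src 'nonce-{RANDOM}' 'strict-dynamic'; object-src 'none'; base-uri 'none'
```

Under such a policy, an injected `<script>` without the matching `nonce` attribute never executes, so it cannot reach `navigator.usb` even though the page's own scripts can.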

anga commented 3 years ago

Yup, that makes much more sense (the "securer contexts").