WICG / private-network-access

https://wicg.github.io/private-network-access/

making things worse #61

Closed ray007 closed 2 years ago

ray007 commented 3 years ago

CORS is a server-side defined policy, voluntarily enforced by the client after the server has already sent the data. And it was made opt-out instead of opt-in. So integrating data from several sites suddenly required extra work from website owners. All of this was sold to us under the label of "security". Does any of the above sound like security?

And now another policy is being forced onto users and website owners without asking them. Who thought that breaking existing applications by pretending this is about security was a good idea?

I know this will probably be ignored and quickly closed, so /rant off.

mikewest commented 3 years ago

The mechanism here aims to require servers within a private network to opt-in to communication from public networks. Clients will send a specially-formatted OPTIONS request as a preflight, and won't send a credentialled GET/POST request until the server responds to the preflight with a set of headers that allow the communication. Does that address your concern?
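
To make the mechanism concrete, here is a minimal sketch (in Python's stdlib `http.server`) of a private-network server opting in via the preflight described above. It uses the `Access-Control-Request-Private-Network` / `Access-Control-Allow-Private-Network` header names from the current draft; the allowed public origin is hypothetical:

```python
# Sketch: a server on a private network that answers the PNA preflight.
# "https://app.example.com" is a hypothetical public origin.
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "https://app.example.com"

class PNAHandler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        # The browser adds this header to the specially-formatted preflight.
        is_pna = self.headers.get("Access-Control-Request-Private-Network") == "true"
        origin = self.headers.get("Origin")
        if is_pna and origin == ALLOWED_ORIGIN:
            self.send_response(204)
            self.send_header("Access-Control-Allow-Origin", origin)
            # Explicit opt-in: without this header the browser
            # never sends the credentialed GET/POST.
            self.send_header("Access-Control-Allow-Private-Network", "true")
            self.send_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
            self.end_headers()
        else:
            self.send_response(403)
            self.end_headers()

    def log_message(self, *args):  # keep the sketch quiet
        pass
```

Only after the browser sees the 204 with these headers does it send the actual credentialed request.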

ray007 commented 3 years ago

Not at all. It means once again all our customers will need to install new firmware on all their devices when using the publicly available version of our app, because it suddenly stopped working after a browser update.

CORS is one of the worst things someone has come up with for web technology.

mikewest commented 3 years ago

Ah, in that case I guess I misunderstood your opt-in/opt-out concerns above. Yes, this proposal would require that servers running on private networks be updated to explicitly opt-into communicating with public networks. That does mean you'll need to deliver an update of some sort to those servers if you depend on such a workflow.

While I understand that this will require server operators to do some work, the status quo seems unsustainable insofar as servers inside private networks quite often assume protection from internet-based attackers that simply doesn't exist. If alternative mechanisms of protecting these servers exist, I think we'd be happy to explore them. This proposal outlines a compromise that we think protects users without being too onerous for servers, but feedback on that balance is welcome.

annevk commented 3 years ago

CORS is an opt-in relaxation of the same-origin policy. You are arguing as if it puts restrictions in place, while the opposite is true. It's true that in the specific context of local IP addresses part of CORS is being reused to put restrictions in place, but I don't really see how you can generalize from there.

ray007 commented 3 years ago

@annevk yes, you're right, the disaster starts with the same-origin policy. CORS was not a good instrument to deal with it.

Shouldn't the information about which additional addresses might be allowed to be contacted come along with the source document, instead of being pushed to the targets?

annevk commented 3 years ago

Why would we trust the attacker?

ray007 commented 3 years ago

Yes, it is a difficult topic. But currently, collecting data from several (dynamic) sources is already harder than it should be, and this makes it worse. And the browser not telling the caller why a call failed does not help...

annevk commented 3 years ago

Right, that's because the caller is the attacker. (When you develop though it should show up in the developer console.)

And yeah, security is not easy.

ray007 commented 3 years ago

Asking the user for permission to connect to other sites maybe? Seems to work fine for other stuff...

letitz commented 3 years ago

Asking the user for permission certainly could be useful, but it likely should not replace the CORS requirement. We can have both.

I see two main issues with an approach entirely based on permissions:

  1. It is not clear that most users can make an informed decision. The majority of users are not necessarily familiar with network concepts, and properly explaining the consequences of their decision seems hard in full generality.
  2. Granting permissions to non-secure origins is close to meaningless, since any on-path attacker can impersonate any website.

ray007 commented 3 years ago

We should really stop pretending CORS has anything to do with security.

letitz commented 3 years ago

Using CORS preflights here will improve security both for users and for the local network servers that fall victim to CSRF attacks.

Remember that these preflights are sent before the actual request, and contain much less attacker-controlled data. If the local network server does not respond to the preflight, if it crashes, or if it responds with an error, then the CSRF attack fails. That's a security win :)

letitz commented 2 years ago

This issue has been inactive for a while, and I believe the concerns are addressed. Closing, feel free to re-open if you'd like to discuss further.

ray007 commented 2 years ago

The concern has not been addressed, but I seem to be rather alone in my opposition to this. So I guess I'll have to live with it.

falkTX commented 2 years ago

FWIW I care about this too and this broke existing applications.

Basically I have a device that exposes a private network over usb. Previously it was possible to talk with the device from a HTTP website (HTTPS is unwanted since it would require self-signed certificates on a local IP, too much of a hassle for users to set up).

From what I can see, the device never receives a CORS preflight OPTIONS request at all; Chrome simply refuses to allow a http://...com address to talk to a http://192.168.5x.x one.

The device was already handling CORS to validate the requests and ensure they come from the correct domain. Looking for workarounds for this atm, but not sure if I will find any...
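
The kind of device-side origin validation described here can be sketched as follows (a rough illustration, not the actual firmware code; the allowed domain is hypothetical):

```python
# Sketch of device-side CORS validation: only echo back
# Access-Control-Allow-Origin for a known-good domain.
# "https://portal.example.com" is a hypothetical allowed origin.
ALLOWED_ORIGINS = {"https://portal.example.com"}

def cors_headers_for(origin):
    """Return CORS response headers, or {} to leave the request unauthorized."""
    if origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": origin,
            "Vary": "Origin",
        }
    return {}
```

Under the new rules this check alone no longer helps: the browser blocks the public-HTTP-to-private-IP request before the device ever sees it.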

letitz commented 2 years ago

Hi there,

> Basically I have a device that exposes a private network over usb.

Do you mean that the private server allows interacting with a USB device? If so, you may be interested in WebUSB.

> From what I can see, the device never gets a preflight/cors option at all, Chrome simply refuses to allow a http://...com address to talk to a http://192.168.5x.x one.

That's right. Chrome 94 disallows access from a public HTTP website to private IP addresses. Preflight requests are only an additional constraint for accesses from public HTTPS websites to private IP addresses.

> Looking for workarounds for this atm, but not sure if I will find any...

Maybe the Chrome blog post can help?

falkTX commented 2 years ago

> Basically I have a device that exposes a private network over usb.

> Do you mean that the private server allows interacting with a USB device? If so, you may be interested in WebUSB.

No, I mean the USB device exposes itself as an Ethernet device. Those CDC/CDM gadgets that can be enabled through the linux kernel (similar to Android USB tethering I guess).

The device provides its own webserver, and sets up DNS so that you can find it at a local http://192.168.5x.x/ address.

Because this is a local device, on a local network, setting up HTTPS is just not feasible. We want users to plug in the device, let DNS and zeroconf do their thing, and the user simply then has to go to http://mydevice.local (or the specific IP) and be happy.

The integration with other services we provide (for online communities) got broken with this update, because online pages can no longer talk to the local IP. And to be clear, this service runs as an HTTPS page; it is only when communication with the device is needed that we open a separate HTTP-only window so we can send a specific request to the device. I have found a workaround for now, but know it is temporary until more things are blocked...

And I read the blog post, quite sad state of affairs that pages/services now need to beg browsers to keep things working :disappointed: Browsers should not be given this kind of power, I am glad that Firefox still works.

letitz commented 2 years ago

I see. Our answer here, as in the blog post, is to use WebTransport with server certificate hashes to connect to the local device securely. Until that is available, you can register for the deprecation trial to keep your service working.

falkTX commented 2 years ago

The webserver does not handle HTTP/2, much less HTTP/3. WebTransport is out of question as a matter of supporting older devices.

I would rather keep workarounds than give even more power to Google. It shouldn't be up to Google to dictate how the web works.

So thanks, but no thanks.

letitz commented 2 years ago

> The webserver does not handle HTTP/2, much less HTTP/3. WebTransport is out of question as a matter of supporting older devices.

I'm sorry to hear that would not work at all for your use case. Is this a problem of deploying updates, or resource constraints on the device?

> I would rather keep workarounds than give even more power to Google. It shouldn't be up to Google to dictate how the web works.

Indeed, this specification is being incubated here precisely so that others may have a voice in the process. Rest assured that it will not graduate to a W3C specification without other browser vendors being onboard.

falkTX commented 2 years ago

> The webserver does not handle HTTP/2, much less HTTP/3. WebTransport is out of question as a matter of supporting older devices.

> I'm sorry to hear that would not work at all for your use case. Is this a problem of deploying updates, or resource constraints on the device?

Deploying updates is not a problem, but these devices are not that powerful, all things considered. It is also quite cost-prohibitive to support HTTP/3. Updating the webserver core components would require updating quite a few other things too; it easily snowballs... And I am uncertain whether the update would cause side effects, such as the device running slower because newer standards are more complex and thus require more power and more CPU time.