srvrguy opened this issue 6 days ago
As a note for how the code currently works, based on my understanding:
If you are using a local version: converting to PDF uses shell commands.
If you are using a remote version: the module makes one HTTP call to confirm the version and get the WebSocket URL, and the PDF creation flow then runs over that WebSocket.
The proposed change is only in the jsonVersion() function and will only affect remote Chrome configurations. Additionally, it only applies to the one call that confirms the version and gets the WebSocket URL. It doesn't affect the actual PDF creation flow for remote instances.
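To make that concrete, here is a minimal sketch of what such a version check could look like, assuming Guzzle with its default cURL handler; the function signature is illustrative, not the module's actual jsonVersion() code:

```php
<?php
// Illustrative sketch only, not the module's actual jsonVersion() code.
// Assumes Guzzle with its default cURL handler.
require 'vendor/autoload.php';

use GuzzleHttp\Client;

function jsonVersion(Client $client, string $host, int $port): array
{
    // cURL drops a built-in header when it is passed with an empty value,
    // so 'Host:' sends the request without any Host header at all.
    $response = $client->get("http://$host:$port/json/version", [
        'curl' => [CURLOPT_HTTPHEADER => ['Host:']],
    ]);

    // The JSON body carries the browser version and the
    // "webSocketDebuggerUrl" used for the actual CDP connection.
    return json_decode((string) $response->getBody(), true);
}
```

Everything after this call, i.e. the CDP traffic over the WebSocket, would be left untouched.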
About the only configuration that would break with this change would be if someone has proxied the DevTools connection and is relying on the Host header to direct calls to the proxied Chrome instance. I can't really see that someone would be running such an exotic configuration, so I feel the proposed change would be fairly safe to implement.
Hi, good you're reporting back :)
I'm not sure whether I had already found the report you've linked here myself back then. But thanks anyway for mentioning it here.
> converting to PDF uses shell commands
No. Chrome is launched as a sub-process. PDF generation still relies on a CDP connection, though the WebSocket address is identified based on the process' stderr output instead.
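For illustration, the local flow roughly amounts to the following sketch; the binary name and flags are assumptions, not the module's actual invocation:

```php
<?php
// Rough illustration of the local code path; binary name and flags are
// assumptions, not the module's actual invocation.

$process = proc_open(
    'chromium --headless --remote-debugging-port=0 about:blank',
    [2 => ['pipe', 'w']], // capture stderr only
    $pipes
);

$wsUrl = null;
while (($line = fgets($pipes[2])) !== false) {
    // Chrome announces its CDP endpoint on stderr with a line like
    // "DevTools listening on ws://127.0.0.1:<port>/devtools/browser/<id>"
    if (preg_match('~DevTools listening on (ws://\S+)~', $line, $match)) {
        $wsUrl = $match[1];
        break;
    }
}
```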
> I propose adjusting the code so that the Host header is explicitly removed from the request. About the only configuration that would break with this change would be if someone has proxied the DevTools connection and is relying on the Host header to direct calls to the proxied Chrome instance. I can't really see that someone would be running such an exotic configuration
The reporter of the vulnerability in Chromium also mentioned this and proposed adding a new command-line option, --remote-debugging-hosts, which doesn't seem to have been considered.
But this is exactly my concern now, as to me this isn't such an exotic thing to do. It's about the only way to safeguard against this vulnerability while still having a Chrome instance running in an isolated network: accept requests originating from a customized whitelist of hosts and proxy them to Chrome with the Host header set to localhost.
And many of our users cannot install Chrome locally due to policy restrictions. Those have to install Chrome on a remote, less constrained system. That's also the reason why I suggested in #71 to not spoof the header unless the error response matches this specific issue. Because if the error response matches, the remote isn't a reverse proxy, and we can be sure that spoofing the header doesn't break anything at all.
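A rough sketch of that idea, assuming Guzzle; the error text is taken from the Chromium patch and may differ between Chrome versions:

```php
<?php
// Sketch of "only spoof after a matching error", assuming Guzzle. The
// error text comes from the Chromium patch and may vary by version.
require 'vendor/autoload.php';

use GuzzleHttp\Client;

const HOST_CHECK_ERROR = 'Host header is specified and is not an IP address or localhost';

function fetchVersion(Client $client, string $remote): array
{
    // First attempt: a fully RFC-compliant request, Host header intact.
    $response = $client->get("http://$remote/json/version", ['http_errors' => false]);

    if ($response->getStatusCode() >= 400
        && strpos((string) $response->getBody(), HOST_CHECK_ERROR) !== false
    ) {
        // A matching error proves we are talking to Chrome directly rather
        // than to a reverse proxy, so overriding the Host header is safe.
        $response = $client->get("http://$remote/json/version", [
            'headers' => ['Host' => 'localhost'],
        ]);
    }

    return json_decode((string) $response->getBody(), true);
}
```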
> converting to PDF uses shell commands
> No. Chrome is launched as a sub-process. PDF generation still relies on a CDP connection, though the WebSocket address is identified based on the process' stderr output instead.
Ah, thanks for the correction. I was just looking through the code quickly so I could get this posted. Either way, local Chrome is a different code path, so any changes made on the remote connection shouldn't affect that path.
> I propose adjusting the code so that the Host header is explicitly removed from the request. About the only configuration that would break with this change would be if someone has proxied the DevTools connection and is relying on the Host header to direct calls to the proxied Chrome instance. I can't really see that someone would be running such an exotic configuration
> But this is exactly my concern now, as to me this isn't such an exotic thing to do. It's about the only way to safeguard against this vulnerability while still having a Chrome instance running in an isolated network: accept requests originating from a customized whitelist of hosts and proxy them to Chrome with the Host header set to localhost.
There are two ways to run a Chrome instance with some security that I can think of:
1. Keep Chrome on a private, isolated network and have this module connect to the DevTools service directly.
2. Put a reverse proxy in front of the DevTools service for connections that have to cross a public network.
I would expect option 1 to be more popular, and option 2 to be used only if you need, for some reason, to accept connections across the public network where you don't want a direct connection to the DevTools service. Even then, the Host header wouldn't be required unless the proxy were configured for name-based virtual hosting with multiple systems proxied on that specific port.
> And many of our users cannot install Chrome locally due to policy restrictions. Those have to install Chrome on a remote, less constrained system. That's also the reason why I suggested in #71 to not spoof the header unless the error response matches this specific issue. Because if the error response matches, the remote isn't a reverse proxy, and we can be sure that spoofing the header doesn't break anything at all.
I personally don't think there is a risk of breaking configurations, but I do see your point. I'll see about making it so that it only tries changing the Host header if it gets an error first.
I'm not a network professional, but I try to respect RFCs. The Host header is mandatory, and we have to support varying setups with a single implementation. So spoofing it or dropping it is, in my understanding, the worst thing we could do.
I also had a quick look yesterday at how it is even possible to perform a request without the Host header in Guzzle, and it isn't as straightforward as it would be if this were a common thing to do.
So, yeah, checking the error message and only then doing something non-standard is better than doing non-standard things unconditionally.
That's an entirely fair position. I think the Chrome DevTools situation may be an exception, but that shouldn't come at the expense of a change that can be handled in a different way, one that at least tries the compliant way first.
I've updated #71 with a fix that I've tested.
Back in 2018, Google added some restrictions to their browser to prevent a security issue with DevTools where a malicious webpage may be able to control the browser. This issue was documented over at https://issues.chromium.org/issues/40090537 and the patch implementing the fix is at https://chromium-review.googlesource.com/c/chromium/src/+/952522
Unfortunately, this change breaks this module if you're using a remote DevTools instance. While the traditional setup would be to install Chrome locally on the system running Icinga Web, this is not possible with the official containers. For a container setup, you would run Chrome DevTools in a container as well and then configure the module with its remote address.
The changes to Chrome, however, make this tricky. The patch checks the Host header of the request and rejects it unless the value is an IP or one of the hardcoded options that would normally resolve to "localhost". While you can configure the module to use an IP, these are often dynamically assigned on container start by the container engine or Kubernetes networking. Using a service name is stable, but runs into the Host header check.
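As a hypothetical illustration of the symptom (the service name, the IP, and the exact status and error text are assumptions based on the linked patch):

```php
<?php
// Hypothetical illustration; "chrome" as a service name and the IP are
// made up, and the exact status and error text may vary by version.
require 'vendor/autoload.php';

use GuzzleHttp\Client;

$client = new Client(['http_errors' => false]);

// DNS name: rejected, since "chrome" is neither an IP literal nor localhost.
$response = $client->get('http://chrome:9222/json/version');
echo $response->getStatusCode(); // e.g. 500, with a body along the lines of
                                 // "Host header is specified and is not an IP address or localhost."

// IP literal: accepted, but the address may change whenever the container
// is recreated, which is exactly the problem described above.
$response = $client->get('http://172.17.0.5:9222/json/version');
echo $response->getStatusCode(); // 200
```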
I had previously submitted a PR (#71), but it was not approved. I do understand, since hardcoding a Host header of "localhost" does potentially break things. The problem isn't specific to containerized setups either, as was mentioned there, since you can run a remote Chrome instance on a plain VM or even bare metal if you don't want to rely on a static IP.
There is one other alternative, which I'm proposing here before submitting another PR. The comments on the Chrome patch note that it is expected that "non-browser automation software" will not send a Host header as a full browser would.
I propose adjusting the code so that the Host header is explicitly removed from the request. This should safely continue working for all setups, and will allow remote Chrome versions higher than 65 to function when the remote host is configured as a DNS name instead of an IP.
If this is acceptable, I can provide a new PR, or adjust my existing PR for the behavior I am proposing.