ungoogled-software / ungoogled-chromium

Google Chromium, sans integration with Google
BSD 3-Clause "New" or "Revised" License

Reconsidering the removal of Safe Browsing via the GN build flag #1100

Open Eloston opened 4 years ago

Eloston commented 4 years ago

A continuation of the discussion here: https://github.com/Eloston/ungoogled-chromium/issues/1087#issuecomment-655728186

Right now, we need large patches to remove Safe Browsing after disabling it via the GN flag. This is a tedious and time-consuming process that causes subtle bugs in ungoogled-chromium.

Until #1087 becomes a reality, we have two options right now:

  1. Leave Safe Browsing enabled, but disable its requests to Google web services. We'd need a smaller patch to disable these requests instead of the large patches we have now. Also, any requests we forget to patch out should be caught by domain substitution. It may even fix some long-standing bugs we have right now. However, this will leave more useless (but not unused) functionality in the browser.
  2. Continue disabling Safe Browsing via the GN flag (shown below) and maintain our existing patches.
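
For context, the toggle in question: ungoogled-chromium turns Safe Browsing off at compile time through Chromium's `safe_browsing_mode` GN argument. A minimal `args.gn` fragment, assuming the argument name is unchanged in current trees:

```gn
# args.gn -- option 2's build-time removal of Safe Browsing
safe_browsing_mode = 0  # 0 = disabled; code guarded by the buildflag drops out
```

The large patches mentioned above are what is still needed on top of this flag, to clean up the call sites the compiler no longer has definitions for.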

Thoughts?

wchen342 commented 4 years ago

My concern is whether relying solely on domain substitution will let unwanted connections leak through. Enabling Safe Browsing while keeping the browser ungoogled would need substantial testing before we could call it safe. On a side note, maybe we should have someone, or some tool, monitor what connections UC makes for each release?
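
On that side note: one low-effort way to watch a release's outbound connections, without any TLS interception, might be to point the browser at a tiny logging proxy and record the CONNECT targets. A rough sketch in Go; the port and log format are arbitrary choices for the sketch, not anything the project ships:

```go
// connlog: log every host the browser tries to reach.
// Start it, then run the browser with:
//   chromium --proxy-server=http://127.0.0.1:8888
// TLS traffic is tunneled untouched, so only hostnames are visible.
package main

import (
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// The interesting part: every outbound connection shows up here.
		log.Printf("browser -> %s %s", r.Method, r.Host)
		if r.Method != http.MethodConnect {
			http.Error(w, "plain HTTP not relayed in this sketch", http.StatusNotImplemented)
			return
		}
		upstream, err := net.Dial("tcp", r.Host)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		// Take over the client socket and splice the two connections.
		client, _, err := w.(http.Hijacker).Hijack()
		if err != nil {
			upstream.Close()
			return
		}
		client.Write([]byte("HTTP/1.1 200 Connection Established\r\n\r\n"))
		go func() { io.Copy(upstream, client); upstream.Close() }()
		io.Copy(client, upstream)
		client.Close()
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8888", handler))
}
```

Diffing the logged hosts between two releases would then show any new connection a release introduces.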

jstkdng commented 4 years ago

I'm still on the idea of having our own tool that emulates Google's servers. Maybe for testing we could modify all requests that go to Google and just redirect them to that tool, which would in turn return dummy data or empty responses, effectively making Safe Browsing useless and giving users more control. We could even bundle that tool in each platform repo and have UC spawn it as its own process. Go could be used, as it compiles statically and can be built for multiple targets/architectures.
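
For illustration, a minimal sketch of the kind of stub described above, using Go's built-in net/http server. The loopback address, the port, and the blanket 204 reply are assumptions for the sketch, not endpoints or behavior Chromium actually expects:

```go
// safebrowsing-stub: answer every request Safe Browsing would have sent
// to Google with an empty but well-formed response.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		log.Printf("stubbed: %s %s%s", r.Method, r.Host, r.URL.Path)
		// 204 No Content: callers see "success" carrying no data.
		w.WriteHeader(http.StatusNoContent)
	})
	// Loopback only, unprivileged port: placeholder values.
	log.Fatal(http.ListenAndServe("127.0.0.1:8965", nil))
}
```

Whether a bare 204 actually satisfies the Safe Browsing client code is exactly the kind of thing that would need the testing wchen342 asks for above.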

wchen342 commented 4 years ago

I don't think that is a good idea. First, it will be a burden to maintain such a tool, which requires at least a web server and multi-process handling; these things are known to be hard to keep secure. Another problem is that this may be simple on Linux but not so much on other systems like macOS and Android.

jstkdng commented 4 years ago

Yes, it would be a burden, but we trade that burden for not having to remove the Safe Browsing integration, which Google will only keep expanding. Regarding the OSs, can't one just create a web server and have it run on some port that doesn't require root permissions? Edit: we could use the built-in web server in Go. Gitea is a good example of that: it is a standalone binary with a config file and its own HTTP server. Edit 2: it could maybe also be implemented in Rust, though I have no experience with it.

wchen342 commented 4 years ago

The costs of the two solutions are not equal: at least for now, patching out the Safe Browsing part is significantly easier than creating a web server, and as long as the flag exists the patching won't be too hard to do. Also, from a quick search, Go can only run as native code on Android, which means its functionality will be greatly restricted compared to Java code, and maintenance doesn't look easy because of the NDK. I doubt Rust can even run on Android. I don't know whether it would work on Windows/Mac, but I suspect Mac could have problems too.

And still, my most important concern is that introducing a web server adds an unnecessary security risk to the system. One major principle in security is: if you can do something the simplest way, do not try anything fancy. A web server without the right configuration and protection can be an easy target for attackers. There is no reason for a browser to spawn a web server process alongside itself unless it needs to provide functionality like file sharing.

ghost commented 4 years ago

I don't know if this is possible, but I think we could just keep Safe Browsing disabled in the browser settings without patching all the code: remove the part of the GUI code where Safe Browsing can be enabled, making it impossible for the user to turn it on. Even if it were somehow turned on, domain substitution wouldn't let it connect to Google's servers.

EDIT: I didn't find it in chrome://flags; I think it is impossible.

Eloston commented 4 years ago

I think @wchen342 makes a good point about connection leaks, but it's not clear how one would go about testing such leaks. Obviously we should use the Safe Browsing source code to help identify connections, but that's still a lot of code to read.

Also, while it's true that many connections to Google are caught by the use of the trk: scheme or the .qjz9zk TLD in URLRequest, there are still a number of ways this mechanism can fail to catch connections. By this reasoning, it seems that disabling the Safe Browsing component (Option 2) is still the better method. However, I'd like to hear the perspectives of those who have been maintaining ungoogled-chromium's patches as of late (which no longer includes me).

ghost commented 4 years ago

@Eloston Can we put an intermediate server between the user and Google's servers? I mean, when the user's browser asks whether a website is safe, the request goes to our server, and the server sends the same request on with its own identity (IP, other info) instead of the user's. After getting the response, it sends it back to the user.
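
To make the idea concrete: such a relay is essentially a reverse proxy that re-issues each lookup under its own identity. A minimal Go sketch; the upstream URL is a placeholder rather than the real Safe Browsing endpoint, and none of the scaling or certificate issues raised below are addressed:

```go
// relay: forward lookups upstream so they leave with the relay's IP
// instead of the user's.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder upstream; a real deployment would target the actual
	// Safe Browsing API host.
	upstream, err := url.Parse("https://example.com")
	if err != nil {
		log.Fatal(err)
	}
	proxy := &httputil.ReverseProxy{
		Rewrite: func(r *httputil.ProxyRequest) {
			r.SetURL(upstream) // re-issue the request against the upstream
			// Deliberately not calling r.SetXForwardedFor, so the
			// user's IP never reaches the upstream server.
		},
	}
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```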

wchen342 commented 4 years ago

That is only feasible when there are not too many users, I believe. This is a MITM (since most connections are HTTPS now); some CDNs do this for convenience, but they have a huge network to handle all the incoming connections. Possible problems include:

  1. You need something that can handle a high volume of concurrent requests.
  2. You have to keep your server and certificates safe, because the browser would no longer trust only Google's; if a certificate leaked, attackers could eavesdrop on all users of the browser.
  3. An IP pool is probably required, since a high volume of repeated requests from one IP to the original server can get blocked.

ghost commented 4 years ago

It seems like making a server is very risky and also costs money.

Option 2 is the best choice I guess.

jstkdng commented 4 years ago

The idea of the server was to make the valid request from that side, but if that causes problems, then the dummy result can be inserted directly into the Chromium source instead. Patch size won't change that much in that case.

ghost commented 4 years ago

My guess is that 99% of UC users already use an ad blocker, script blocker, password manager, etc., and check their URLs. I don't see a reason to enable Safe Browsing or other security features.

Option 2.