I'm not sure it will be possible to make a general policy on that. While I never ran the numbers to back this up, my guess would be that a (very) considerable part of the ruleset database triggers active mixed content blocking, so you would be disabling a huge number of perfectly fine rulesets to take care of what I expect, on the other hand, to be a very limited number of edge cases.
I'm not saying we should go back and update existing rulesets. Let's assume they're fine because no one has complained about them (yet).
And contributors who want to add a single line to an existing ruleset will have to rewrite the whole ruleset? Great.
@galeksandrp I agree it would be annoying for a contributor who wants to make a small update to an existing ruleset to have the maintainer demand they fix a lot of old problems with mixed content. In that case I think it's okay for the maintainer to ask for the other stuff but if the contributor doesn't want to do it, then the maintainer should probably just accept the small update. It depends on how broken things are.
@jeremyn: But if that holds for previous rulesets, then it is also likely to hold for future rulesets.
@fuglede If we need to make a policy change on how we handle mixed content blocking or anything else, it shouldn't depend on whether we're willing to update the huge number of existing rulesets. Otherwise we would never make any policy changes.
My point is that since we have had a huge number of rulesets that result in apparently trivial mixed content blocking and a corresponding low number of side-effects, one would expect that a future ruleset which is subject to apparently trivial mixed content blocking will also have no side-effects.
Sure, there are examples of side-effects as you have noticed, and even if you get a strictly positive usability benefit, that would come at the price of a huge cut in what the add-on covers. Again, I didn't run the numbers to back this up, but I wouldn't be surprised if, once you ignore the trivial rulesets where hosts redirect to HTTPS themselves, something like a third of the entire repository causes the mixed content blocker to trigger in one way or another.
In the case I originally reported (#5217), we were hiding a "Download/Donate/Purchase" menu item, but because the site uses responsive design, that only happened when the browser window was large enough. On another MCB site I looked at recently (#5232), the only apparent breakage was that their ads were hidden. I think that's a serious problem but it's unlikely anyone other than the site owner would complain. So MCB that looks trivial in one configuration might be a serious problem in another, and MCB might have a major impact on a site even if we don't get any complaints.
Do you think it's an acceptable usability trade-off that by having a permissible approach to MCB, we might be taking revenue from small sites?
> Do you think it's an acceptable usability trade-off that by having a permissible approach to MCB, we might be taking revenue from small sites?
That actually sounds pretty harsh, sorry. I guess what I'm saying is that I think some forms of broken behavior are so bad that we should really try hard to avoid causing them, and since we can't reasonably tell how a particular site is broken, we should be more cautious with MCB for all rulesets.
Well, it's a completely valid question, and I acknowledge that there is room for a variety of opinions on the matter. Maybe you could phrase it in terms of what the user would expect the behaviour of the add-on to be, and here there is likely no clear answer either. Regarding the concrete case: If we can reasonably justify that all a given ruleset will break is the ads on the site, then yes, I think that is a perfectly reasonable sacrifice.
In the time I've been following the project, the consensus has been to preserve the given site's main function and otherwise not worry much about it; I went hunting for old examples on the matter, and #715, #2512, #2710 provide some of those.
Regarding user expectation: I've used HTTPS Everywhere for years and I don't think of it as an ad-blocker.
When it comes to ads, the user isn't sacrificing functionality, because the user probably doesn't want to see ads. Aggressive rulesets are win-win for them because they get increased security and no ads. The one making the sacrifice is the site owner who is losing revenue. So in this case we need to be especially careful talking about what's a reasonable sacrifice, because nobody in this discussion is actually losing anything by it.
In the ad example I gave before (#5232), the site was for a small FOSS-related project. You can imagine that visitors to a FOSS site are more likely to use HTTPS Everywhere, so if HTTPS Everywhere started blocking their ads, we might actually cost that FOSS project significant money.
Adding on to that, it would be helpful to know how big a part of HTTPS Everywhere's userbase also has an adblocker installed.
What I've done a couple of times (for Danish news sites primarily) is to reach out to the devs of the site in question. Many will claim that the main thing keeping them from upgrading connections by default is mixed content blocking due to ads, but more often than not, those ads can actually be secured.
Yeah, and although I prefer no ads, I go through some big sites every once in a while to see if the advertising companies used there can be safely upgraded. Many support it, but the site owner says they still can't switch due to the rest of the advertising companies that don't. They could work on their internal requests for the time being, but even that doesn't occur. Their own subdomains for CSS, JS, and images could very well be secured but aren't.
@fuglede I suspect a large overlap. People who care about their privacy are very likely to have both an adblocker and HTTPS Everywhere.
After more thought and looking at more rulesets, I don't have a good answer. I don't want to effectively disable a ruleset for a large site just because of some ads. On the other hand I don't want to invisibly block ads for small sites that might depend on them. (I'm fine with ad-blocking generally, but not when it's unexpected like from HTTPS Everywhere.)
Having official guidelines would help so it's less of a judgment call and so we have something to point to if a site owner complains about why we blocked their ads but not someone else's. HTTPS Everywhere shouldn't be seen as engaging in favoritism. It's worth noting that EFF's "adblocker" Privacy Badger is unusual in that it looks at site behavior instead of maintaining a list of sites to block. It would be good if our approach to mixed content policy were based on objective behavior instead of the subjective opinions of the people writing and reviewing the ruleset.
Thanks for the discussion, this is definitely something that falls into a grey area. I generally fall into the anti-ad camp, but this is also not the stated mission of HTTPS Everywhere. However, if users are not protected by HTTPS Everywhere just because we've disabled the rule for breaking ads, that's a bad situation too. Generally, I agree with @jeremyn that when I consider the "feature breakage" criteria of our rulesets, I don't consider ads to be a feature of a site.
That being said, if ads are blocked due to MCB, it adds an extra incentive for ad networks to make their ads available over HTTPS, which is a good thing for the web in general. Major sites such as http://www.nytimes.com/ downgrade their users to HTTP simply because their ads, provided by ad networks that care absolutely nothing about users' security, will be blocked by MCB if they make HTTPS available to users. Here, not only are the ad agencies providing a vector for malware infection, they are effectively preventing users from browsing securely. So I don't have much sympathy if their ads are blocked as an incidental consequence of providing strong security for users.
If I was forced to provide a steadfast rule for this, here's what it would be: if a ruleset breaks ads, so be it. The site maintainers are welcome to submit PRs themselves which disable the rule. If they are unwilling to do this, it's probably not worth worrying about. It's not our job to ensure that sites' revenue streams are uninterrupted, but it is our job to make sure users are being provided the best endpoint security they can get without disrupting functionality.
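For reference, a PR that disables a rule would typically just add a `default_off` attribute with a short reason. A minimal sketch, using a hypothetical host:

```xml
<!-- Hypothetical example: the ruleset stays in the repository,
     but ships disabled, with the reason recorded in default_off -->
<ruleset name="Example (hypothetical)" default_off="breaks ads via mixed content blocking">
	<target host="example.com" />
	<rule from="^http:" to="https:" />
</ruleset>
```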
Can this be closed?
Sure.
@fuglede @Hainish @J0WI This issue is for general discussion.
My opinion is that if any mixed content blocking (MCB) happens, it needs to be marked, either by disabling the affected target or by marking the entire ruleset with `platform="mixedcontent"`, depending on how many targets are affected. By "any mixed content blocking" I mean that if Firefox lets you unblock mixed content, then that target should be marked.

This seems harsh, but some motivation is that in pull request #5217, I pushed back on MCB and the contributor @galeksandrp made the valid point that the MCB was only a font and some search branding, which seems minor. However, it turned out that this MCB hid an important menu item in a common browser configuration. So even seemingly minor MCB can really break a site, and because a maintainer can't reasonably check all pages in all configurations, we should err on the side of usability.
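For concreteness, the two options look roughly like this; the host names are illustrative, not taken from any real ruleset:

```xml
<!-- Option 1: mark the whole ruleset as triggering mixed content blocking -->
<ruleset name="Example (hypothetical)" platform="mixedcontent">
	<target host="example.com" />
	<target host="www.example.com" />
	<rule from="^http:" to="https:" />
</ruleset>

<!-- Option 2: keep the ruleset enabled but drop the target whose pages
     trigger MCB, so only the unaffected hosts are rewritten -->
<ruleset name="Example (hypothetical)">
	<target host="static.example.com" />
	<rule from="^http:" to="https:" />
</ruleset>
```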
What do you all think?