Thorin-Oakenpants closed this issue 5 years ago
PS: reading this: https://medium.com/privateid-blog/privacy-inequality-the-most-brutal-form-of-inequality-youve-ever-imagined-e674d4f3cd42
I'm gonna be one of The Protected, not one of The Predictables
- but it will still hurt: i am sure that a lack of data points will be held against people (getting a loan, insurance rates etc). It may pay to start curating a public ID of all things nice, sugar and spice....
SCS - the Chinese Social Credit System - coming soon to a country near you! - actually it's already here - many have been canned at work because of social media
riskified.com prevented me from making a purchase at one of their clients' e-stores, one reason being that my e-mail address was unknown, and another because they were unable to find an on-line presence
EDIT: and THANKS for HTTPZ claus! perfect
Nosedive edit: in case that's not clear, it's a link to a wikipedia article about a Black Mirror episode (not a nosediving :cat2: or hamster gif)
Medium.com's article, Privacy Inequality: The Most Brutal Form of Inequality You’ve Ever Imagined. If I happen to maintain my position among The Protected, and not already be amid The Predictables, it's because I'm aware and behave accordingly; both conditions are required.
Being aware: is this a grace, a faculty of intelligence, a product of one's cultural environment? Acting accordingly: this is obviously a matter of will, the refusal of defeatism.
Many ignore the value of their privacy and how it's being invaded, others know but don't care, others care but are defeatist.
Several parameters for several scenarios. Privacy is certainly not merely a cultural phenomenon; I believe it is a natural component of ethics and a fundamental psychological requirement. At the same time, some people measure their private life against cultural standards, which leads, for instance, to people discussing their most private affairs on talk shows, as if to say: if others do it, why not me? That sheep syndrome is a high-risk factor: manipulation also consists of getting people to believe that everyone except them does it, so why should they be the exception?
Let us think by ourselves, to start with.
Social Credit System
Please stop calling it that. This is not the kind of credit a moneylender extends to people who ask for it. It is the kind of "credit" a robber in a dark alley extends to his victim when demanding to be taken to the victim's flat to take the money stored there too, in exchange for some small chance of the victim being spared. That kind of "credit" is usually called not credit, but robbery and blackmail. Black must be called black.
Social Credit System
Please stop calling it this way.
i didn't - the Chinese government did - i think you misunderstood what i wrote
i'm quite aware of the surveillance taking place in so-called "developed" countries and how it affects society
i didn't - the Chinese government did
It doesn't matter who started calling it that first. What matters is that it is incorrect to call it that, even if you are only repeating what someone else said.
OMG ... stop calling it Facebook .. its not a book :trollface: :)
If you care to read all 21 (they're short) .. Rick Falkvinge's posts reinforce what was said about kids growing up with no sense of what privacy is, and surveillance being the norm. FFS, people fought and died for privacy.
And where was Winston .. they couldn't see him... but they knew where he was .. he was in the one place they couldn't see him .. so they knew exactly where he was ... and no, I have never read 1984
I have one or two addons in mind, but really don't have a heart to ask, since I know you are busy with other stuff.
I can't promise I will be able or willing to make them, but I don't mind you asking :)
@StanGets thank you for pointing this out
Coloring the urlbar background according to the site's security status is easily done right from the user's userChrome.css file with a few lines, available e.g. via Color your URL bar based on website security
I have played with it after your post and if you are interested, here is the result https://github.com/crssi/Firefox/blob/master/userChrome.css
Cheers
@claustromaniac
Thank you. :heart:
I was thinking about a "fork" of TC, which would be less complicated (if possible even not configurable) with some functions stripped off and some added. But I know this would be a complicated project. If you are interested, I will dump more details.
Cheers
@crssi ,
I have played with it after your post and if you are interested, here is the result https://github.com/crssi/Firefox/blob/master/userChrome.css
Nice work. I copy, report while others enhance... c'est la vie (my universal answer for laziness! .. but mainly for a great lack of knowledge in the coding area!)
I was thinking about a "fork" of TC ...
interesting - because you're thinking about this, i assume the API isn't going to see the addition of the functionality necessary to clear all storage per-domain any time soon?
Didn't assume anything, but until TC works I really don't care about those APIs, since all the browsing done here is in temporary containers, so all the storage is destroyed sooner or later.
not sure you understood what i wanted to say ... what i meant was, if the API is expanded in the near future to include removal of all storage per-domain, then wouldn't FPI be enough without containers?
if not though, then i'm certainly interested in your container idea
^^You already follow the topic https://github.com/ghacksuserjs/ghacks-user.js/issues/395#issuecomment-453128207. I learned there that FPI still breaks a :cat: :shit: (cross-domain logins..., but I haven't observed any... yet). So, with a "correctly" configured TC (and usage), we can still get similar "protection" even with FPI disabled and without breaking cross-domain logins.
@crssi @claustromaniac
re: coloring identity box/url bar based on status
the 'Librefox HTTP Watcher' add-on didn't work for me - it colors both the search and url bars red and leaves them that way forever after you, for instance, load an about: page
if you guys have this problem, or are looking for an alt solution, see CustomCSSforFx and look in /classic/css/locationbar/identitybox_colors.css - he's coloring the ID box (only) differently for every available status apparently
@atomGit I am using now my custom css, see https://github.com/ghacksuserjs/ghacks-user.js/issues/492#issuecomment-453474612 But upper link is a great resource. :+1: Thank you
ADD: What a karma: Just stumbled now to additional customization resource at https://github.com/FirefoxBar/userChrome.js-Collections-
@atomGit thanks for spotting that specific css. Aris-t2's /CustomCSSforFx is a haven of most valuable CSSs for tweaking Firefox's appearance, either used as such or, as I do, by selecting here and there a given css we'll then paste into our userChrome.css (beware of dependencies when applicable). Let's not forget that it is Aris-t2 who developed the famous Classic Theme Restorer, dedicated to restoring the old look to Australis Firefox back when legacy add-ons were still the rule ...
indeed - CTR was something i could not and would not live without - ShadowFox is another nice theme - a dark theme that takes much of the bite out of the 'white flash' syndrome when you load a new tab
@crssi,
I was thinking about a "fork" of TC, which would be less complicated (if possible even not configurable) with some functions stripped off and some added. But I know this would be a complicated project. If you are interested, I will dump more details.
Actually, there are some things about TC that I would like to change myself, and I don't use like half of its features or more. Realistically, I don't have the time for a project of such complexity for now, though. :crying_cat_face:
@atomGit,
@crssi @claustromaniac
re: coloring identity box/url bar based on status
You didn't mean to @ me there, right?
@claustromaniac
Actually, there are some things about TC that I would like to change myself, and I don't use like half of its features or more. Realistically, I don't have the time for a project of such complexity for now, though.
Understandable; when (if) you have the time and the wish, we can do a little brainstorm about features. :cat:
Cheers
https://github.com/april/certainly-something is an extension to view information about the current state of your HTTPS connection, using the TLS Info API implemented in Firefox 62.
Maybe it's an active alternative to the legacy extension SSleuth?
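For anyone curious how such an extension gets at the connection state: since Firefox 62, a WebExtension with the right webRequest permissions can call `browser.webRequest.getSecurityInfo(requestId)` and inspect the returned object's `state` field (`"secure"`, `"weak"`, `"broken"` or `"insecure"`). A minimal, runnable sketch of mapping that state to a display colour — the colour choices are my own, not taken from any add-on:

```javascript
// Hypothetical mapping from securityInfo.state (as returned by
// browser.webRequest.getSecurityInfo() in Firefox 62+) to a display colour.
// The colours are arbitrary examples chosen for this sketch.
function stateColour(state) {
  const colours = {
    secure: "green",   // valid TLS connection
    weak: "orange",    // weak cipher or similar
    broken: "red",     // TLS error
    insecure: "grey",  // plain HTTP, no TLS at all
  };
  return colours[state] || "grey"; // fall back for unknown states
}

console.log(stateColour("secure")); // green
console.log(stateColour("broken")); // red
```

An extension like certainly-something would feed real `securityInfo` objects into something along these lines; here the function is exercised with literal strings only.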
FYI: https://www.zdnet.com/article/websites-can-steal-browser-data-via-extensions-apis/ - ignoring the obvious and clickbaity stuff .. it's just FYI. There's an online tool where you can paste the content of an extension's manifest.json file. So 16 extensions out of 9,391 tested.
AFAIK Mozilla is going to, or thinking about, moving to a curated list at AMO - whatever that exactly means. It was listed in a roadmap/all-hands doc I saw
FYI I've just added an issue on Mozilla Addons' GitHub repository concerning Webextensions in general : Webextensions don't respect cookie policy
i'd appreciate your comments on this.
^^ let me know the bugzilla number that comes from this
edit: I suspect that this is something that will be resolved by another bug, possibly the ones that stemmed from #489 .
What you are talking about is behind-the-scenes: that is the request doesn't have a "tab" with a domain in it (sorry for my shitty description). And this happens in privileged content - such as extensions installing/updating scripts/filters/lists (stylish, violentmonkey, uBO, uM, etc), extensions getting updated, background checks for FF updates, FF pings, as well as actual pages: e.g about pages, chrome://, activity stream homepage... etc.
I'm not an expert on all of this (the inner working of how Firefox handles requests and the chain they go thru), but I think this has something to do with an urlClassifier they keep talking about, and they have been moving to make sure everything (SB, TP updates, etc) and I mean everything goes thru this first, so that DNT is honored for one thing, and other obvious bits and bobs.
Anyway, let me know the bugzilla number
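For what it's worth, the "request has no tab" situation described above is observable from an extension: in the webRequest API, the request details object carries `tabId === -1` when the request is not tied to any tab (extension list updates, background checks, etc.). A runnable sketch of that classification, using invented sample detail objects:

```javascript
// Requests not associated with any tab (filter-list updates, background
// checks, internal pings, etc.) carry tabId === -1 in the webRequest API's
// details object. The sample objects below are invented for illustration.
function isBehindTheScenes(details) {
  return details.tabId === -1;
}

const samples = [
  { url: "https://easylist.to/easylist/easylist.txt", tabId: -1 }, // e.g. a uBO filter update
  { url: "https://www.example.com/", tabId: 7 },                   // an ordinary page load
];

console.log(samples.map(isBehindTheScenes)); // [ true, false ]
```

In a real extension this predicate would run inside a `browser.webRequest.onBeforeRequest` listener; here it is exercised against plain objects so the logic is testable on its own.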
Concerning bugzilla I'd have to open an account, not that I can't but i admit that I always considered bugzilla as reserved for pros: just reading some reports there makes me believe a strong technical background is required to validate one's descriptions, arguments and so on. But I'll think about it. I believed that Mozilla Addons' GitHub repository would handle the issue...
the request doesn't have a "tab" with a domain in it
This gives me a path to start conceptualizing the reasons for the mighty power granted to Webextensions, but IMO, whatever their privileges, they shouldn't prevail over Firefox's default settings, such as blocking cookies when blocking cookies is the general cookie behavior the user has chosen. And why are cookies blocked (referring to your analysis) when I explicitly make an exception? No tabs either in that case. I'm puzzled, and bugzilla indeed seems the place to go.
If I get this issue on bugzilla I'll of course report the number here. I admit being aware (I mean: frightened, let's admit it) of a lasting trial-like debate on Bugzilla, where my lack of technical skills could possibly end in a refusal by Mozilla devs to consider my request on the ground of explanations I wouldn't understand: I think I need a good lawyer!
Thanks @Thorin-Oakenpants
bugzilla for pros only .. not really. The only rule for filing bugs in my experience (been around for almost a 100 years) is to always provide STR (steps to reproduce) - then it is up to triage in this case.
What you said in the linked mozilla addons repo is fine - has STR, expected results, actual results. Just copy/paste that in with an apt title. I think this is a very important bug to file. And I don't expect you to be able to find an existing bug - I don't even think I could.
OK, I've filed the bug at 1525917 - Webextensions don't respect cookie policy
We'll see, first if it's accepted, secondly the answers i'm anxious to discover.
Thanks @Thorin-Oakenpants for providing that push I needed :=)
Extensions that use the Cookies API declare this behaviour in the manifest file.
^uBO uses <all_urls>
Permission to access any protocol and domain.
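To make that concrete: an extension that wants the cookies API declares both the API permission and host permissions in its manifest.json; combined with `<all_urls>`, it can read and write cookies for any site. A minimal (hypothetical) fragment illustrating the declaration:

```json
{
  "permissions": [
    "cookies",
    "<all_urls>"
  ]
}
```

This is only the permission-related slice of a manifest, not a complete one; a real file also needs keys such as `manifest_version` and `name`.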
Hah .. they bumped it to P1 - maybe a decision is pending soon
The problem I have is that it allows third-party cookies, which are not required for an extension to have its own localStorage, IDB etc. I have no idea if they will WONTFIX this
IMO, since they made WE calls privileged content (legacy extensions such as uBO could see these, WE's cannot), then there is no control over rogue extensions doing all sorts of shady things - but they could do that anyway without the need for third party cookies AFAIK (I can think of a dozen evil things)
I'm not too worried about it, because we vet our extensions, and they come from respected devs - that said, I was surprised by this, but I do remember it being mentioned during the buildup to quantum - just a shame there's no way for an extension to monitor other extensions
we're going to need robots to service the robots who service the robots...
Indeed, modifications have to be feasible and meet Mozilla's agreement: not sure which of the two is the harder challenge.
As I see it Firefox's storing is the culprit of all these complications
First, the very concept of allowing sites to store anything else than cookies: why? Don't tell me it's for bandwidth, then why? Is it that some Firefox features require that sites be allowed to store data in the user's profile? localStorage, IndexedDB ... why?!
Secondly, concerning IndexedDB, why is it that Webextensions store their data in the same user's profile sub-folder as sites, [PROFILE]\storage\default\ ?
Why not two different policies and storage folders, i.e.
[PROFILE]\storage\extensions\
[PROFILE]\storage\websites\
I don't know if it's only me, but the candid view of a novice is that the general idea seems to be to ignore the distinction between data originating from the browser and data downloaded from sites, integrating both.
I don't know if they can even distinguish between webextension calls versus other ones (like mozilla's internal ones vs web pages etc)
Yeah. Isn't this simple question revealing of the chaotic, unclear, messed-up implications of the very structure of Firefox storage? The issue I mentioned on Bugzilla, naively as a non-techie, stirs up the inconsistencies of Firefox's very build, IMO. There's something wrong, maybe not in the code (how should I know?) but in the way things, and especially storage, have been thought out.
Mozilla, like all developers I guess, aims to make complicated things appear simple to the user... but no one up to now has been able to combine what is incompatible, and trying to do so leads to problems such as Webextensions laying cookies for the sites they call even when the user has set the cookie behavior to block them ALL.
Wait and see ...
First, the very concept of allowing sites to store anything else than cookies: why?
Because they have valid use cases. It's part of the standards. But they control it thru the cookie permission - a site doesn't have to actually set or use a cookie, it just needs that permission.
Why not two different policies and storage folders
Because that would create twice the work. You don't need to separate the location of the data files; you simply tag the data with an appropriate flag, e.g. web extension data is already identifiable by the fact the folders start with moz-extension. Same reason we don't have separate files for 1st party cookies vs 3rd party. Or separate files per domain.
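For illustration: the folder names under [PROFILE]\storage\default\ already encode the principal, so distinguishing extension storage from site storage is a string check rather than a separate directory tree. A sketch — the site folder name is one quoted later in this thread, the extension UUID is invented:

```javascript
// Folders under [PROFILE]/storage/default/ encode their origin in the name:
// extension storage starts with "moz-extension+++<uuid>", site storage with
// the scheme, e.g. "https+++...". The UUID below is made up.
function isExtensionStorage(folderName) {
  return folderName.startsWith("moz-extension+++");
}

const folders = [
  "moz-extension+++c0ffee00-1234-4abc-8def-000000000000",
  "https+++www.youtube.com^firstPartyDomain=youtube.com",
];

console.log(folders.map(isExtensionStorage)); // [ true, false ]
```

So a per-origin policy doesn't need a [PROFILE]\storage\extensions\ split; the prefix already carries the information.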
rah rah
I'm not an expert on the internal workings of FF, or even the correct terms to use. The key issue for me is that they can differentiate extension (non web page) calls vs Mozilla internal ones vs actual web pages. I think they call this a principal
- i.e what is calling the shots?
And this is where it's too complicated for me; I'd have to be a Moz dev working in this area to know enough. Just because the principal may be an extension doesn't mean it isn't an open web page (but they can probably know it's meant for an open tab/web page). And I don't know what they call this either; I'll just call it a target.
I look at extensions that use logins: lastpass for example. It NEEDS 3rd party cookies AFAICT, because, without a webpage, from within the extension's options, it can log into your lastpass account online and get your data for local display and editing etc.
But yes, there are inconsistencies: as clearly seen, extensions don't obey cookie rules. If they did, then I'd expect a lot of breakage, and users would have to open the manifest of each UUID and apply site exceptions.
The cookies/site data user interface is all about websites. The old UI used to say something like Accept cookies from websites. It is not meant to be about extensions. Imagine if a user decides to block all cookies and suddenly extensions break; we'd be back to this, which is not a user-friendly, acceptable solution.
It really comes down to the Moz devs on how they want to classify this. Yes, extensions can do web calls, and yes, they need a 1st party cookie to store their options, and probably 3rd party in a lot of cases, in order to work. And the current solution is probably the best - look, extensions can do lots of things; a cookie is not the worst thing in the world. We have to vet extensions anyway because they are so powerful.
Note to self: I wonder if the cookie & related storage (e.g easylist.to via uBO updating filters) is assigned an Origin Attribute?
The very concept of allowing sites to store anything else than cookies
Because they have valid use cases. It's part of the standards.
There may be valid use cases (I'd love to know which ones), but meanwhile they allow sites to lay data in my IndexedDB storage folder without the reason being obvious to me.
If I've set the cookie behavior to block all by default with exceptions set by me it is because of what allowing cookies leads to: why does bostonglobe.com lay data in my IndexedDB, why does youtube.com as well? Maybe we're in the same scenario as plain cookies where sites use them even when not required: a valid feature abused by some, many websites.
I just do not want and do not accept a website storing data in my browser profile without my explicit authorization. Hence blocking all cookies by default, but more: should I be interested in a site to the point of accepting its cookies, I would not want that cookie allowance to extend to the site creating its IndexedDB in my profile. Same with localStorage: no persistent cookie (Allow exception) for a site that lays data in my localStorage (webappsstore.sqlite), which I clean anyway once Firefox is closed.
In other words: yes to basic cookies; no, no, and systematically no to persistent localStorage (never persistent with session cookies); and ultimately no to IndexedDB in my profile for anything other than the Webextensions I've installed.
What we all observe is that a browser includes features meant for the best user experience (I don't have in mind developers aiming to trick users on the basis of industry lobbying requirements, no conspiracy theory) and that these valuable features are exploited by some websites for their own profit and not for the user's benefit. Hence the amount of defense mechanisms developed by users, starting with ghacks-user.js itself, aiming at preserving the best and controlling, maybe not the worst natively, but the worst of what sites occasionally do with the best.
And this is only the beginning, IMO. Whatever the best intentions, the trend is and remains digging into users' lives. Period.
Yeah look. They are valid mechanisms, but like everything else, they get abused.
Personally, I have had cookies blocked by default for the last 5+ years. I've allowed sites I log into a cookie (about 10 sites over the years, currently just four), I've allowed about 6 more sites a session cookie to function (and blocked them in uMatrix in headers). That's about it. I've never seen any persistent localStorage or IDB shit in all that time, and the web works for me just fine. Just think of the tens of millions of things I didn't have to clean up or have tracking me
That's what I do as well. I just happen to find this way of proceeding a bit radical. Anyway, "no woman no cry" and "no browser no cry". We love 'em both.
@Kraxys said
Just discovered the FF addon Site Bleacher. It automatically clears cookies, local storage and indexedDB as soon as the last tab for a domain is closed. Far fewer options than CAD, but Site Bleacher clears indexedDB, which CAD can't.
I said
there is no extension API available to clear IDB by domain
and Indexed DBs are removed when first site tab is open, so sites are unable to access data from previous session
if I read that correctly, it says IDBs (plural) are removed when you visit your first website after opening the browser - see the words "from previous session". I don't think English is the author's first language - that sentence is very unclear.
You can already do this, btw, by clearing data on close
@crssi said
I read it as: when the first tab of a specific domain is opened, the IDB is cleared for that very domain, which makes sense
I said
Why don't you test it
@crssi said
Well, yes and no. If the WE injects code into the visited site to delete the IDB, then it can be deleted. I assume that Site Bleacher does that and deletes the IDB if this is the first tab to open that particular site/domain.
I have tested the Site Bleacher... not that I need it, since I am using TC for that purpose, but Site Bleacher is successful in deleting IDB.
Did you test it?
Did you test it?
No. I don't have time. I'm not an expert, but I have questions about this
I've installed and now use this Site Bleacher Firefox extension in place of the previous ForgetMeNot. While Site Bleacher indeed wipes a site's cookies and localStorage (unless whitelisted in Site Bleacher) once the site is closed, it fails to remove any IndexedDB data set by that site. I've tried with my two testing sites, bostonglobe.com and youtube.com, which both lay their IndexedDB when cookies aren't blocked for them, and in neither case was their IDB cleaned up, unfortunately.
But Site Bleacher does the job perfectly well for cookies and localStorage, and weighs only around 50KB I think.
once the site closed, it fails to remove any IndexedDB data set by that site
It (supposedly) removes it when you open the site (but not on subsequent pages if you already have a tab of it open)
It (supposedly) removes it when you open the site (but not on subsequent pages if you already have a tab of it open)
If I remember correctly, but I'd have to test again, it may have removed bostonglobe's IDB, but I'm sure it didn't remove youtube's IDB. There's also something special with youtube's IDB folder name, which ends with something like '3rd-party' ...
I'll test again right now. Stay tuned :=)
My 1st comment confirmed : neither bostonglobe nor youtube have had their IDB wiped by Site Bleacher.
Usually (default), my network.cookie.cookieBehavior = 2 = block all.
For the test I've set network.cookie.cookieBehavior = 1 = block third-party cookies.
Opened bostonglobe.com and IDB folder name was:
https+++www.bostonglobe.com^firstPartyDomain=bostonglobe.com
Opened youtube.com and IDB folder name was:
https+++www.youtube.com^firstPartyDomain=youtube.com
=> Neither wiped by Site Bleacher.
I in fact use Site Bleacher to recreate the 'Allow Temporary Cookie' behavior that was handled by some legacy add-ons, that is: accept the cookie while on the site, then remove it - the 3rd option after what we know as session cookies and allowed cookies (which remain after FF restart).
Global cookie policy: block all. Exceptions: Allow and keep (3 sites) or Allow for session (several, either because they require cookie authorization to display correctly (i.e. userstyles.org) or because I want to log in).
For these exceptions I don't want the cookie to remain once I've quit the site => Site Bleacher or ForgetMeNot. I know these cookies (set with network.cookie.lifetimePolicy = 2 = session) would be removed once FF is closed, but I just don't like cookies following me even within the session when in fact I needed them only while visiting the site. Mastering the scenarios at 100%.
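For reference, the two prefs mentioned in this setup can be expressed as a user.js fragment — values as stated in this thread, the comments are my own summary:

```js
// user.js sketch of the cookie prefs discussed above
user_pref("network.cookie.cookieBehavior", 2);  // 2 = block all cookies (1 = block 3rd-party only)
user_pref("network.cookie.lifetimePolicy", 2);  // 2 = limit accepted cookies to the session
```

Site-specific Allow / Allow-for-session exceptions then sit on top of this global policy via the permissions manager, not via prefs.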
@Thorin-Oakenpants I never said that the WE injects the code; I used an IF. The author should tell us, or we should peek into the code, to be sure what method is used.
The question was: does this WE clear IDB? And my tests show that it does. I didn't comment on whether the method is good or what implications lie behind it, since that was not the question at the start.
@StanGets The fact that the IDB files are there does not mean that those IDBs were not data-cleared.
TEST 1:
You will see that the IDB, LS and Cookies were filled in the previous visit and not cleared.
TEST 2:
You will see that the IDB, LS and Cookies were cleared and do not show the data from the last visit on revisit.
I haven't got time to look at this. This sucks - I had a bookmark for testing IDB: you added a title and then some text, as many entries as you want, and when you came back it would display your stored data. That's all you need, because what you entered was unique
https://demo.agektmr.com/storage/ looks like it might do the job for a test.
I have looked into the source code and I guess my guess was right... code injection is involved.
This is the code injected:
```js
// injected into the page: wrap window.indexedDB.open so that every open()
// call also dispatches a "new_indexdb" CustomEvent the extension can listen
// for, before delegating to the original implementation
const oldIndexedDBOpen = window.indexedDB.open;

function newIndexedDBOpen(arg1) {
  const e = new CustomEvent("new_indexdb", {
    detail: arg1
  });
  document.dispatchEvent(e);
  return oldIndexedDBOpen.apply(this, [arg1]);
}

newIndexedDBOpen.bind(window.indexedDB); // note: bind() returns a new function, discarded here
window.indexedDB.open = newIndexedDBOpen;
```
This opens up all your valid questions about whether this is good or not. As I said before, I am not using this extension and I do not intend to use it, since everything is covered by TC here.
Cheers
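The pattern in that injected code is ordinary function wrapping (monkey-patching): replace a method with a wrapper that observes each call, then delegates to the original. A self-contained, runnable illustration of the same idea, using a stand-in object instead of `window.indexedDB`:

```javascript
// Generic version of the interception pattern used by the injected code:
// wrap a method so each call is observed before delegating to the original.
const observed = [];
const fakeIDB = {
  open(name) { return `db:${name}`; },  // stand-in for indexedDB.open
};

const originalOpen = fakeIDB.open;
fakeIDB.open = function (name) {
  observed.push(name);                      // the real extension dispatches a
                                            // CustomEvent here instead
  return originalOpen.apply(this, [name]);  // behaviour is otherwise unchanged
};

console.log(fakeIDB.open("notes")); // db:notes
console.log(observed);              // [ 'notes' ]
```

Because the wrapper delegates with `apply`, callers see the same return values as before; the only difference is the side channel recording the database names - which is exactly why such injection is detectable by the page, one of the open questions raised above.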
This is it: https://static.raymondcamden.com/demos/2014/feb/7/index.html#/home
Now use that to stick in unique data
^^ I have now tested on the page you suggested and revisiting the page comes out clean, IDB cleared.
But since we already have a working solution, I would not analyze this extension any further... and it raises too many other valid questions... detection of this extension, CSP, does it work when JS is disabled... and on and on.
First of all, it doesn't clean up after itself, because naturally it only cleans when you open the domain, and it can't wipe folders etc. But you can do all that sanitizing on close. There is an option to whitelist until close (in other words, to session a domain), but to me that's the same as whitelisting, because you clear on close.
The very first time I tested it, the extension didn't do anything until I clicked on settings via the icon. And even though I didn't change anything, THEN it worked. I hope that was just a one-off and not per-domain.
- make sure nothing is deleted on close (keep all cookies and shit)
- visit test page
- add a note, e.g Chocolate Cake
- close page
- open page (see your note is still there)
- close browser
- open browser & visit page (see your note is still there)
- close page (don't leave it open)
- enable/install extension
- close browser (important)
- open browser, go to test page - your data is still there
^^ FAILED on first attempt
- after that it seemed to work as expected
Here are my french fries
As long as I closed all instances of the test page, the next time I opened it, it was emptied. This will leave persistent disk data (forensics), but will prevent the website from accessing it next time you visit.
I do not know about the rest of the potential issues such as detecting script injection, function names, leaking UUID, and causing issues with other extensions, or even failing on sites with CSP or whatever.
That said, it's a metric fuck-tonne better than C-AD regards leaving orphaned data around to be re-used to track you
does it work when JS is disabled
Well, without JS, the data can't be read either
I do have questions over the third-party data; it should be cleaning ALL data by first party IMO, and I'm not in a position to test it all - but using the storage inspector and a site that uses IDB on a 3rd party would be a start
First time it failed because you had the test page open at the time you installed the WE. ;)
Well, without JS, the data can't be read either
Sure, but what about when you enable JS in the middle of loading a page?
previous threads #294 #211 #12 woo... the old issue of 294 is a palindrome of this issue 492 ... spooky :ghost:
Use this issue for extension announcements: new, gone-to-sh*t, recommendations for adding or dropping in the wiki list 4.1: Extensions. Stick to privacy and security related items
:small_orange_diamond: possible additions
:small_orange_diamond: nah feel free to discuss
...