privacycg / CHIPS

A proposal for a cookie attribute to partition cross-site cookies by top-level site

Reduce overall per-site per-partition limit to 1 KiB #74

Open annevk opened 9 months ago

annevk commented 9 months ago

Having discussed this with colleagues, this would be a limit we're comfortable with.

It might also be good to have a slightly lower limit than 180 that applies to partitioned cookies, e.g. 40 as was suggested before?

We should also carefully define what happens when you hit the limit.

krgovind commented 9 months ago

Thanks for the feedback, @annevk! We're open to considering a lower limit; in terms of next steps, we would like to:

  • Get feedback from the developer community to ensure that this doesn't break any known use-cases before moving forward.
  • Look at Chrome metrics to ensure this won't break [...] We're in the process of adding these.

@annevk - I was also curious about how you arrived at the conclusion that 1 KiB would resolve the performance concerns raised on WebKit/standards-positions/issues/50? E.g., is it based on looking at data, or on instinct derived from debugging latency issues, etc.?

annevk commented 9 months ago

Thank you @krgovind for the quick response, that's great to hear!

As for your question: I'm not sure it would fully resolve the concerns, but it seemed like a reasonable cap for the use cases presented thus far, and by our estimation it wouldn't regress memory overhead too much if a variety of websites were to go all in on this technology.

krgovind commented 9 months ago

Tagging @LGraber @cmawhorter since you expressed interest in CHIPS as web developers on #66, and @nicjansma since you gave us feedback on the previous memory limit in #48 - I was wondering if you have any concerns with limiting partitioned cookies to a memory limit of 1 KiB per partition (i.e. per top-level site, embedded-site pair), instead of Chrome's current implementation which has ~~4 KiB~~ 10 KiB as the limit?

Also, @erik-anderson @bvandersloot-mozilla - please let us know if you have any concerns as browser representatives.
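To be concrete about what a per-partition limit would mean, here is a rough sketch of the accounting. The exact rules are part of what we would need to define; this sketch assumes the limit counts the name + value bytes of all Partitioned cookies sharing one (top-level site, embedded site) partition key, and all names and numbers are purely illustrative.

```ts
// Illustrative sketch only: one possible way a per-partition size limit
// could be accounted for. Assumes single-byte characters so that string
// length approximates octet count.
interface PartitionedCookie {
  name: string;
  value: string;
}

const PROPOSED_LIMIT_BYTES = 1024; // the 1 KiB limit proposed in this issue

// Total size of the cookies already stored for one
// (top-level site, embedded site) partition.
function partitionUsage(cookies: PartitionedCookie[]): number {
  return cookies.reduce((sum, c) => sum + c.name.length + c.value.length, 0);
}

// Would setting `incoming` push the partition over the limit?
function wouldExceedLimit(
  existing: PartitionedCookie[],
  incoming: PartitionedCookie
): boolean {
  const incomingSize = incoming.name.length + incoming.value.length;
  return partitionUsage(existing) + incomingSize > PROPOSED_LIMIT_BYTES;
}
```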

elreydetodo commented 8 months ago

A limit of 1 KiB seems far too small to me. I am one of the architects for Akamai's security products, and the overhead from our cookies alone can exceed 1 KiB. Our cookies are used to protect sites from scraping, transaction abuse, account-opening fraud, and various other types of abusive or fraudulent activity. These sorts of activities can cost the site operator substantial amounts of money, and some forms of transaction abuse can infuriate other users of the site (think of a sneaker sale event where bots buy all the shoes and real users get none). It's really important that products like ours continue to function.

While our use of cookies is normally first party (i.e. same domain as the request we send them with), when one of our protected pages is rendered within a cross-domain iframe those cookies become third party from the perspective of the browser. This has always been a problem for Safari users because all third-party cookies are dropped, and sometimes that's not actually what you want to happen.

It sounds like by going down this CHIPS path you are considering allowing some third-party cookies in Safari, but partitioned and with limits. Allowing some cookies is better than allowing none of them, but the limit of 1 KiB is simply too small. Our cookies have a minimum size of ~350 bytes due to encryption and various security measures embedded within them, and a user might receive as many as 6 cookies depending on the features our customer has enabled for their site.

We have a number of concerns about the cookies that go past what you would normally care about when assigning a user a cookie. We have to prevent things like value tampering, replay attacks, and cookie harvesting. Doing all of that requires embedding verifiable details about the original cookie recipient, and that takes space.

rfreire commented 8 months ago

+1 to @elreydetodo's comment.

We're using CHIPS to support cross-site / cross-tenant fraud-prevention initiatives at MercadoLibre, and 1 KiB per site per partition seems quite small.

We're OK with the 10 KiB limit in Chrome, and it would be better to have the same limits everywhere for interoperability, to avoid having multiple implementations depending on the browser (as we have today).

annevk commented 8 months ago

Is the limit in Chrome 10 KiB or 4 KiB? (I also thought it was 10, but @krgovind stated 4 above.)

@rfreire it would be helpful if you stated your minimum. For @elreydetodo's use case it seems like 2 KiB would suffice.

krgovind commented 8 months ago

> Is the limit in Chrome 10 KiB or 4 KiB? (I also thought it was 10, but @krgovind stated 4 above.)

Sorry, I corrected my comment. The limit in Chrome is indeed 10 KiB.

elreydetodo commented 8 months ago

@annevk 6 * 350 = 2100 bytes is the minimum possible size with all of our features enabled, without actually counting the data for those features. The 2100 bytes is just the overhead for having the features enabled at all.

Additionally, the customer might have other products operating that require additional cookies with their own overhead. 2 KiB isn't enough either. We're one product among potentially many that might need cookies. The 10 KiB limit in Chrome was picked after looking at actual usage data; it wasn't just picked out of thin air.

johnwilander commented 8 months ago

> We have a number of concerns about the cookies that go past what you would normally care about when assigning a user a cookie. We have to prevent things like value tampering, replay attacks, and cookie harvesting. Doing all of that requires embedding verifiable details about the original cookie recipient, and that takes space.

This sounds to me like information the server wants, and that needs to be validated. The information is stored on the client with cryptographic proof/protection that lets the server trust it. Is that right?

elreydetodo commented 8 months ago

> > We have a number of concerns about the cookies that go past what you would normally care about when assigning a user a cookie. We have to prevent things like value tampering, replay attacks, and cookie harvesting. Doing all of that requires embedding verifiable details about the original cookie recipient, and that takes space.
>
> This sounds to me like information the server wants, and that needs to be validated. The information is stored on the client with cryptographic proof/protection that lets the server trust it. Is that right?

The cookie data we use for those purposes can be described as client and session integrity data. The origin (actually the CDN, in our case) does verify it for consistency, and if any of the details are inconsistent it feeds into risk scoring for the request. This is on top of the real cookie payload/purpose, which also plays into risk scoring (assuming the cookie wasn't found to be invalid).
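To make the size floor concrete, here is a rough sketch of the general pattern (not our actual cookie format; the field names and HMAC construction are purely illustrative): binding a cookie to its original recipient and signing it already costs on the order of 150+ bytes before any feature data is added.

```ts
import { createHmac, randomBytes } from "node:crypto";

// Illustrative only (Node 16+): a generic integrity-protected cookie value.
// Client binding, freshness, and a signature add fixed overhead before any
// actual feature payload is included.
function buildIntegrityCookie(clientBinding: string, secret: Buffer): string {
  const payload = {
    sub: clientBinding,                           // who the cookie was issued to
    iat: Date.now(),                              // issue time, to bound replay
    nonce: randomBytes(16).toString("base64url"), // entropy against harvesting
  };
  const body = Buffer.from(JSON.stringify(payload)).toString("base64url");
  const mac = createHmac("sha256", secret).update(body).digest("base64url");
  return `${body}.${mac}`; // ~150+ bytes even with an empty feature payload
}
```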

Dsbryant commented 8 months ago

> Having discussed this with colleagues, this would be a limit we're comfortable with.
>
> It might also be good to have a slightly lower limit than 180 that applies to partitioned cookies, e.g. 40 as was suggested before?
>
> We should also carefully define what happens when you hit the limit.


estein-de commented 8 months ago

Please do not make the limits even stricter than they are. Many of us work in environments where we are trying to operate multiple pieces of software, built on multiple platforms that do not all work the same way, on the same domain, for use by customers who need authenticated access to embedded content. To be clear, I do not work at a company that is in the ad or third-party tracking space at all - our users are paying us for services that are delivered in many embedded contexts, and all of those users are actively trying to use the authenticated content.

Arbitrarily reducing this seems to have little upside and would do a great deal of harm to many developers, most of whom may not even be aware of this GitHub project where these discussions happen. The transition to CHIPS and partitioned cookies is hard enough to manage as it is without making it worse by severely restricting storage space.

tteggel commented 6 months ago

> Get feedback from the developer community to ensure that this doesn't break any known use-cases before moving forward.

I work for Book Creator, which is an educational content creation platform. We offer strong privacy guarantees to our users, with no tracking (other than what we use internally for support) or advertising of any kind. We are in the process of upgrading our integration into various Learning Management Systems so that we can offer an embedded UX to teachers and students. @claudevervoort sums up this use-case nicely. This requires that we run in an iframe, so all our cookies are third-party.

Our current cookie load is 2.9 KB for the access token, CSRF token, and IDs for support systems. We could probably trim this a bit by offering a worse support UX.

We would love to serve Safari/WebKit users the same seamless UX that we are able to offer in other browsers, but as a bootstrapped startup the cost of re-engineering our auth systems is too high, so today we offer reduced functionality and a janky popup-based UX in Safari. Additionally, even if we were to rework our auth to support the Safari way, we would lose important defence-in-depth security protections such as HttpOnly, and introduce extra complexity because we would have to rewrite all our asset src attributes to include an auth token.
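For illustration, this is roughly the kind of response we would like to keep sending from the embedded context under CHIPS, with HttpOnly and Secure intact. Cookie names and values here are placeholders, not our actual cookies.

```ts
import { createServer } from "node:http";

// Illustrative sketch: an embedded (cross-site) service setting partitioned
// session and CSRF cookies. Partitioned cookies must also be Secure and, in a
// cross-site iframe, SameSite=None. (In practice this would sit behind TLS.)
createServer((_req, res) => {
  res.setHeader("Set-Cookie", [
    "__Host-session=opaque-access-token; Path=/; Secure; HttpOnly; SameSite=None; Partitioned",
    "__Host-csrf=random-csrf-token; Path=/; Secure; SameSite=None; Partitioned",
  ]);
  res.statusCode = 204;
  res.end();
}).listen(8443);
```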

> Look at Chrome metrics to ensure this won't break [...] We're in the process of adding these.

I'd be interested to see these metrics. Is there somewhere I can track the work, @krgovind? Or is there any data you can share yet?

estein-de commented 4 months ago

I would also like to know if there is any analysis of the real-world impact of this change that we can follow along with, @annevk or @krgovind.

edgul commented 4 months ago

For context, I am working on implementing this feature in Gecko.

Regarding:

> We should also carefully define what happens when you hit the limit.

There has been some suggestion that implementations will reject cookies that exceed the per-partition byte capacity (here and here).

If we assume a per-partition byte limit of 10 KB, then it's not hard to imagine a scenario where 10 cookies of 1 KB each (or perhaps 100 cookies of 100 B each) are set, stay valid for the foreseeable future, and then the next cookie comes along.

A couple of questions come to mind for other implementers:

  1. Is this use case common enough to be worth worrying about?
  2. What is the intended/implemented behaviour? Outright rejection of the newest cookie? Removal of the oldest (still valid) cookie? One possible shape of the latter is sketched below.
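For concreteness, here is a sketch of what option 2 could look like: evicting the least recently used cookies in the partition until the incoming one fits. This is purely illustrative of the trade-off, not what Gecko implements and not what the spec currently defines; rejecting the incoming cookie outright is the other obvious choice.

```ts
// Illustrative sketch of per-partition eviction when a byte limit is hit.
// Sizes are approximated as name + value length (single-byte characters).
interface StoredCookie {
  name: string;
  value: string;
  lastAccessed: number; // ms since epoch
}

const LIMIT_BYTES = 10 * 1024; // e.g. Chrome's current 10 KiB per partition

function cookieSize(c: StoredCookie): number {
  return c.name.length + c.value.length;
}

// Returns the new contents of the partition after attempting to set `incoming`.
function setCookieInPartition(
  partition: StoredCookie[],
  incoming: StoredCookie
): StoredCookie[] {
  // Most recently accessed first; evict from the tail.
  const kept = [...partition].sort((a, b) => b.lastAccessed - a.lastAccessed);
  let used = kept.reduce((sum, c) => sum + cookieSize(c), 0);
  while (used + cookieSize(incoming) > LIMIT_BYTES && kept.length > 0) {
    used -= cookieSize(kept.pop()!);
  }
  // If the incoming cookie alone exceeds the limit, reject it and keep the
  // partition unchanged.
  return used + cookieSize(incoming) <= LIMIT_BYTES ? [...kept, incoming] : partition;
}
```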