patcg-individual-drafts / private-ad-measurement

Privacy preserving advertising attribution

Clarifying Google’s sparse histogram use case for PAM #9

Open bmcase opened 10 months ago

bmcase commented 10 months ago

Clarifying Google’s sparse histogram use case for PAM

I'll open this issue here on PAM, but it is more of a question directed to @csharrison. Charlie, I'd like to clarify my understanding of the use case you are trying to solve with the sparse histograms you talked about in the PAM ad hoc call, where you expressed the need to send a very large number of adIDs to the device for the mapping table used to generate Advertiser reports.

In showing ads on the open web, there are three use cases that seem like they could be related to what you’re looking for (informed by Ben Savage’s experience with Meta’s Audience Network):

  1. We want to provide the Advertiser with measurement of conversions across their ads. We don't think Advs want to see a breakdown of how many conversions resulted from ads shown on each of the 10,000s of sites their ad was shown on. Rather, they want to see results grouped by more actionable breakdowns, like the different creatives they have used. In this case, we don't see the need for the mapping table sent to the device to be huge – it can just be on the order of the distinct creatives or ads the Adv wants to measure.
  2. An Adv may actually want to see a breakdown of how many times their ad was shown on each of the 10,000s of sites for the purpose of brand safety (the Adv wanting to know their ad isn't appearing on sites they don't want their brand associated with). But this problem can be solved without any cross-site measurement, because we don't need to consider whether there was a conversion: the ad network can just count how many of the Adv's impressions are shown on each site and give the Adv this report.
  3. The third use case, which really gets interesting, is to support an ad network that needs to understand the post-click conversion rate for different site_ad pairs. What happens often is that the same ad shown on different sites may lead to very different rates of deep-funnel conversions. This is because some sites are poorly designed and result in a lot of accidental clicks. The ad network needs to know this about different sites to incorporate into the bid model that tells the Adv how much they should bid to show their ad on different sites. What we want for calibrating this is a breakdown of how many conversions result from every site_ad pair – that may be too many breakdowns to get a good signal/noise ratio, so we could coarsen ad to ad_set or even group similar sites together so we have enough traffic to measure (see the sketch after this list).
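
To make the coarsening in (3) concrete, here is a minimal sketch (nothing in it is specified by PAM; the `ad_to_ad_set` and `site_to_group` mappings and all other names are hypothetical choices an ad network might make). It simply rolls fine-grained site_ad clicks and conversions up into coarser buckets before computing post-click conversion rates:

```python
from collections import defaultdict

# Illustration only: hypothetical coarsening of site_ad breakdown keys.
# `ad_to_ad_set` and `site_to_group` are mappings the ad network would
# choose; none of these names come from the PAM spec.

def coarsen(site, ad_id, ad_to_ad_set, site_to_group):
    """Map a fine-grained (site, ad) pair to a coarser (site_group, ad_set) key."""
    return (site_to_group.get(site, "other_sites"),
            ad_to_ad_set.get(ad_id, "other_ads"))

def post_click_cvr(clicks, conversions, ad_to_ad_set, site_to_group):
    """Estimate post-click conversion rate per coarsened key.

    `clicks` and `conversions` are iterables of (site, ad_id) events,
    e.g. recovered from (noised) aggregate counts.
    """
    click_counts, conv_counts = defaultdict(int), defaultdict(int)
    for site, ad_id in clicks:
        click_counts[coarsen(site, ad_id, ad_to_ad_set, site_to_group)] += 1
    for site, ad_id in conversions:
        conv_counts[coarsen(site, ad_id, ad_to_ad_set, site_to_group)] += 1
    return {key: conv_counts[key] / n for key, n in click_counts.items() if n > 0}
```

In practice the network would keep merging buckets until each one has enough traffic that the noised aggregate counts give a usable conversion-rate estimate.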

My understanding is you're trying to solve something like this 3rd use case using Advertiser reports, which is why you need to ship down a set of adIDs roughly the size of the number of sites. I've been thinking about how you might be able to solve this 3rd use case using PAM publisher reports while keeping this huge mapping off the device. Luke clarified that doing what we usually call "late binding" of breakdown keys to publisher reports seems reasonable in PAM. In fact, PPM has an issue to potentially support just this in Prio by letting shares come with labels, so that the query is simply to aggregate all shares with the same label.
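
As a rough illustration of that label idea (this is not the PAM or Prio protocol; the two-helper additive sharing, the field, and the label strings below are assumptions made just for the sketch), each report could carry a cleartext breakdown label plus secret shares of its value, and the query would simply sum the shares per label:

```python
import secrets
from collections import defaultdict

# Toy two-helper additive sharing; real Prio adds zero-knowledge validity
# proofs and PAM adds DP noise. All names here are hypothetical.
FIELD = 2**61 - 1

def share(value):
    """Split `value` into two additive shares mod FIELD."""
    s1 = secrets.randbelow(FIELD)
    return s1, (value - s1) % FIELD

# Each report carries a cleartext breakdown label ("late binding") and
# secret shares of its contribution; no adID mapping lives on the device.
reports = [("site_group_A|ad_set_1", 1),
           ("site_group_A|ad_set_1", 1),
           ("site_group_B|ad_set_1", 1)]

helper1, helper2 = defaultdict(int), defaultdict(int)
for label, value in reports:
    s1, s2 = share(value)
    helper1[label] = (helper1[label] + s1) % FIELD
    helper2[label] = (helper2[label] + s2) % FIELD

# The "query" is just: sum all shares carrying the same label, then recombine.
histogram = {lbl: (helper1[lbl] + helper2[lbl]) % FIELD for lbl in helper1}
print(histogram)  # {'site_group_A|ad_set_1': 2, 'site_group_B|ad_set_1': 1}
```

The property that matters here is that the label is attached at report time ("late binding"), so nothing like the full adID mapping table has to be shipped to the device.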

I think this kind of late binding can let us solve the 3rd use case.

Charlie, can you clarify whether these are the use cases you're trying to support, or whether there is a further, more complex use case? Luke, if you see something about this construction that PAM couldn't support, please let me know.

simon-friedberger commented 10 months ago

AFAIU, large numbers of IDs are intentionally prevented by most proposals, to keep them from being used for tracking. If they are necessary for some use case, their leakage should be analyzed.

csharrison commented 10 months ago

Thanks for filing this issue, @bmcase. For the "sparse" histogram case, the prototypical use case of publisher breakdowns (documented in https://github.com/WICG/attribution-reporting-api/issues/583) can be solved with publisher reports as you describe.

I want to emphasize two things though:

bmcase commented 10 months ago

@csharrison thanks for clarifying. I agree that delays for publisher reports are a concern.

My larger concern at the F2F meeting was around "dense + large" histograms, rather than the truly sparse case (where I believe sketching techniques will also work).

I would think that "dense + large" should also be supportable through publisher reports as described above. Was there a reason, besides delays, that you were thinking we'd need to use Advertiser reports for the "dense + large" case?

csharrison commented 10 months ago

I would think that "dense + large" should also be supportable through publisher reports as described above. Was there a reason, besides delays, that you were thinking we'd need to use Advertiser reports for the "dense + large" case?

Hm, that's a good question. It's kind of hard to answer given that delays are so important for publisher reports. Even if delays were reduced and you could re-query across multiple windows (like ARA event-level reports support), it might require composition / more noise. Maybe there could be a better solution here, though!
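
To spell out the noise concern (a hedged illustration only: it assumes a plain Laplace mechanism with basic sequential composition, which is not necessarily the accounting PAM or Prio would use), splitting a fixed privacy budget across k query windows shrinks the per-query epsilon and grows the noise roughly linearly in k:

```python
import math

# Illustration only: plain Laplace mechanism with basic sequential
# composition; PAM / Prio's actual DP accounting may differ.
total_epsilon = 1.0
sensitivity = 1.0  # each user contributes at most one count per query

for k in (1, 2, 4, 8):                         # query windows sharing the budget
    per_query_eps = total_epsilon / k          # basic composition
    scale = sensitivity / per_query_eps        # Laplace scale b = sensitivity / eps
    std = scale * math.sqrt(2)                 # standard deviation of Laplace(b)
    print(f"k={k}: per-query eps={per_query_eps:.3f}, noise std={std:.2f}")
```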

My impression is that for non-optimization use cases, advertiser reports are more natural, so that's what I was focusing on.