IRTF-PEARG / draft-ip-address-privacy

Internet-Draft on IP address privacy
http://pearg.org/draft-ip-address-privacy/

Counterabuse: law enforcement support. #6

Closed jbradleychen closed 1 year ago

jbradleychen commented 3 years ago

What support should the Internet provide for legitimate law enforcement inquiries and referrals?

sysrqb commented 3 years ago

The framing of this question makes answering it complicated. A motivated attacker/criminal/actor will always find ways to obscure their identity and mislead investigators. A service should not collect more information than it needs "just in case" under the assumption that information could support law enforcement in the future.

Internet protocols and service providers should practice data minimization, as per RFC 6973.
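As a concrete illustration of that data-minimization principle (this is a hypothetical sketch, not anything prescribed by RFC 6973 or the draft): a service can truncate client IP addresses before they ever reach storage, keeping coarse network information that may still be useful for abuse triage while discarding the host bits that identify individual users. The prefix lengths below (/24 for IPv4, /48 for IPv6) are illustrative choices, not a standard.

```python
import ipaddress

def minimize_ip(addr: str) -> str:
    """Zero out the host portion of an IP address before logging.

    Keeps a /24 for IPv4 and a /48 for IPv6 -- illustrative prefix
    lengths; an operator would pick these based on their own threat
    model and utility needs.
    """
    ip = ipaddress.ip_address(addr)
    prefix = 24 if ip.version == 4 else 48
    # strict=False lets us pass a host address and get its containing network
    net = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    return str(net.network_address)

# Documentation addresses (RFC 5737 / RFC 3849) used as examples:
print(minimize_ip("203.0.113.57"))      # -> 203.0.113.0
print(minimize_ip("2001:db8:abcd::1"))  # -> 2001:db8:abcd::
```

The point of the sketch is that minimization happens at collection time: the full address never enters the log, so there is nothing to hand over "just in case" later.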

I'll try posing a slightly different question in the hope of finding a satisfactory answer: What information about users do service providers need to successfully provide their service, and protect themselves, that can also be used by law enforcement?

The answer will vary depending on the service, but enumerating the different types of services and the specific information they need will be helpful.

jbradleychen commented 3 years ago

"can also be used by law enforcement" seems too weak to me. Your framing seems to rely on the assumption of a fair-minded user, e.g. that users are generally not abusive. That assumption collapses when services become very large, and effectively include everybody, which means they include all the criminals, criminals who are statistically insignificant but do disproportionate harm to the public and the platforms.

Your reframing equates success with the success and safety of providers. Can you reformulate it to recognize the importance of public safety?

Your reframing ignores the reality of corrupt and lawless service providers, whose success should not be a priority.

sysrqb commented 3 years ago

These are good points. The reframing I chose specifically equates success in terms of the service provider and user, without providing exceptional access for law enforcement. Public safety is a much larger scope than I originally envisioned.

The framing should capture the importance of public safety, malicious actors, and potentially corrupt and lawless service providers. Can you define/describe public safety within this context and the responsibility (as you see it) that service providers have?

jbradleychen commented 3 years ago

Hmm; I think we should try to realize online systems that are at least as private and as safe as the offline alternatives, to the extent that offline alternatives exist. If we do that, then the public gets better privacy and safety by moving online, avoiding regressions.

If this makes sense to you, then we could think about what tech has done for privacy, and what it has done for safety.

jbradleychen commented 3 years ago

As system designers, I think our responsibility is to design systems that, over time, favor honesty and truth over dishonesty and deception. Privacy can support honesty and truth by protecting vulnerable individuals, enabling them to live in safety and in some cases to share truth more safely. Privacy is not enough though; it also protects dishonest individuals. Is there an argument that privacy favors honesty over deception?

For individuals whose actions impact public safety, Identity Transparency protects honesty and truth by making it possible for the public to hold them accountable. Without accountability, dishonesty has no consequences. Accountability can enable technology to favor truth and support public safety.

sysrqb commented 3 years ago

This is helpful, thank you. I agree, in general, about replicating (and enhancing) the privacy and safety we have in physical spaces.

There are certainly examples of privacy being necessary for honesty and accountability, as you alluded to (e.g., whistleblowers, political dissidents, human rights defenders, assault/abuse reporters). We know safety nets within society are abused when they are available, and some misuse and deception must be tolerated if the system is to exist at all. The threshold of allowed abuse should depend on other factors (e.g., the impact/harm resulting from said abuse). I don't know of a specific argument that privacy favors honesty, but I don't know of an argument for privacy favoring deception, either. However, we do know that privacy and security are human rights, and we should therefore build systems that protect people from arbitrary privacy invasion while still limiting deception/misuse/abuse within and of those systems.

I interpret "Identity Transparency" as meaning "revealing the identity of the person who is taking, or responsible for, a specific action". If this is correct, then I agree this transparency can be used for holding people accountable. However, we know that neither retroactive attribution nor up-front identification was a sufficient deterrent in multiple well-known cases over the last few years, so we should not presume this is a "silver bullet" against dishonesty or malicious activity online.

I think we need a more precise definition and understanding of public safety before we can discuss requirements/expectations associated with it. I am concerned that vague descriptions of public safety will only lead to ineffective mitigations/solutions while creating data-lakes of personal information.

I suspect that separating public safety from law enforcement will be important; however, in some cases supporting public safety may require cooperating with law enforcement to some extent.

Would looking at a few specific (offline and online) examples be helpful for describing their responsibility with respect to public safety, and then we can try extrapolating from there? Looking at what technology has done for privacy and safety may be helpful, too, so we have a shared foundation.

I'm not sure working through all of this in comments on an Issue is the best workflow. Should we try drafting another document for these descriptions as a supplement to the original IP address privacy draft? I think this is an important subject, but these details are tangential to the goals of the original draft.

jbradleychen commented 3 years ago

I agree that identity transparency does not prevent all abuse, but preventing all abuse is not the right goal. In any system with free speech and due process there will be abuse, and I think we need to protect free speech and due process.

Identity transparency makes accountability possible, and I would argue that making accountability possible is a reasonable goal. When accountability is impossible, abuse has no consequences, and the platform is indifferent to truth and honesty.

Requiring identity transparency is a bad idea, but ignoring it is not so great either. In a platform that supports both anonymity and identity transparency, transparent actors are accountable, so they are subject to incentives for truth and honesty, and are incrementally more trustworthy.

Your suggestions to explore examples and to consider another doc are reasonable. Or maybe a conversation. I hesitated to jump in here because I don't want to obstruct useful progress on privacy. There may even be cases where we must tolerate a regression in safety in deference to privacy, but I don't think we should let that happen by accident.

bslassey commented 1 year ago

The current draft discusses address escrow, which seems to be aimed at this issue. Can this be closed out?

sysrqb commented 1 year ago

Hearing no objections, we'll close this issue.