WebStandardsFuture / Vision

Repository to iterate on vision document.

Clarify how web technology alone or with government aid can protect people from harm #3

Open · joshuakoran opened this issue 3 years ago

joshuakoran commented 3 years ago

This document lists many of the harms that bad actors perpetrate with web technologies. So long as people have free will, there will be bad actors that can abuse any technology (e.g., even the simplest ones like stones).

Balancing the rights of individuals with those of society is often challenging. You allude to this in your section on “unintended consequences” when mentioning society's right to detect fraud and governments' desire to bring bad actors to justice relative to an individual's desire to keep their identity private in their web activity.

The important balance of individual rights with regard to other human rights, such as free speech, is called out in the TAG Ethical Web Principles: “This principle must be balanced with respect for other human rights, and does not imply that individual services on the web must therefore support all speech.”

As a greater diversity of cultures and organizations has joined the W3C, it is clear there is not uniform agreement on many topics relating to social vs. individual freedoms, the political sovereignty of non-democratic governments, or the extent to which the W3C should try to understand the impact of changes to its standards on market competition or on government institutions.

Perhaps the document could classify which areas of harms web technology alone ought to address (e.g., preventing malicious code from being installed on web-enabled devices), which areas web technology could help address with the aid of regional governments (e.g., improving fraud detection and bringing bad actors to justice), and which areas are beyond the scope of the W3C (e.g., enabling regional governments to better enforce cultural-specific values).

cwilso commented 3 years ago

Although it doesn't use this wording, I think the document does capture a number of these issues. (E.g., preventing malicious code from being installed on web devices definitely falls under the category of "the Web must be safe for its users"). I'm not sure why we would need to explicitly list all the things that would be out of scope? (E.g., your example of enabling regional governments to better enforce cultural-specific values would not seem to fit in any of the categories we describe.)

joshuakoran commented 3 years ago

Hi Chris,

My core point is that the use of technology is orthogonal to its mere existence.

I did not believe the W3C was established to weigh in on ethical principles of how people use Web technology, but rather to enable them to access, communicate and share information via interoperable web standards.

My example of a "bad" government enforcing its values that we do not agree with was meant to highlight this issue.

We likely would agree that the actions listed below are "bad":

• oppression, or encouraging oppression, of vulnerable people or minority groups.
• planning, encouragement of, or inciting violence against vulnerable people.
• promoting fascism, nazism or other authoritarian systems and practices.
• perpetuating, promoting, or enacting systematic injustices, such as racism, sexism, homophobia, transphobia.

Some governments have laws that do discriminate against certain members of their society (as in the final bullet above), justified by appeals to "culture" or "values." While I personally do not support that, I also believe the above can be perpetrated independently of the use of technology, and thus the web standards development process should not dictate that only certain forms of expression, interaction or sharing are allowed, but should focus on interoperable data that allows expression, interaction and sharing. The alternative would transform the W3C into the rule book for a Big Brother imposing its unilateral decisions as to what content is right or wrong.

Thus, I hope international laws and governments, which I believe ought to be representative of all the people they govern, will help address the political issues associated with balancing social freedoms (e.g., freedom of information and of the press) and individual rights (e.g., the right to be forgotten, which could be abused by a criminal).

In short, free will means bad actors will abuse any technology. Our goal might be to ensure they can be brought to justice more easily, rather than restricting the ability of even good actors to collaborate on the Open Web.

cwilso commented 3 years ago

I can't agree that bringing into existence a technology is orthogonal to its use.

However, I think we DO agree that our goal is not to dictate forms of expression, nor to be Big Brother. Fundamentally, the point of this vision is to underscore that first and foremost, the user should be informed and in control of their destiny. Users come first. USERS should have free will - and if that means knowingly, intentionally enabling bad actors, well, that's their choice.

Our goal is not ensuring bad actors can be brought to justice more easily; the definition of "bad" in this sentence, as you pointed out, is subject to interpretation, and policing actions after the fact is not in the purview of the W3C anyway. Our goal is to enable users to choose what experiences they want to have, and to keep them safe by default. Safe from malicious attacks (e.g., malware installation), and safe from surveillance or other privacy leakage without their knowledge. I think the TAG Ethical Web Principles state this quite well, in https://www.w3.org/2001/tag/doc/ethical-web-principles/#privacy (just above the section you pointed to):

Security and privacy are essential

We will write specs and build platforms in line with our responsibility to our users, knowing that we are making decisions that change their ability to protect their personal data. This data includes their conversations, their financial transactions and how they live their lives. We will start by creating web technologies that create as few risks as possible, and will make sure our users understand what they are risking in using our services.

This introduction identified three areas that are problematic today:

"openness & anonymity enable scams, phishing, and fraud."

I would say these problems arise because users are not well-informed about who they are dealing with. For example, the strong migration to HTTPS and the focus on anti-phishing tools in browsers have dramatically helped the phishing problem.

"Ease of gathering personal information spawned business models that mined & sold detailed user behaviors, without people’s awareness or consent."

I believe that users should be informed of their privacy rights, should understand what information web properties collect and why they collect it, and, fundamentally, should be in control of that information.

"The acceleration of global information sharing enabled misinformation to flourish, be exploited for political or commercial gain, divide societies, and incite hate."

Indeed, this problem is more aspirational. I would personally wish for truthfulness to be highlighted: for users to be enabled to make their own choices, but with fact-checking surfaced alongside. I don't know what addressing this problem looks like; I believe it is a critical problem for humanity to address today, however, and the velocity of information spread on the Web makes us a fundamental carrier of information, so we are at least related to this problem. Will the W3C solve it by itself? Of course not.

cwilso commented 3 years ago

(In case the above diatribe wasn't clear: I'm inclined to close this issue, because I don't want the vision itself to get bogged down in the tactical execution details of how to accomplish these strategic goals; I think that is Strategy work that follows on. However, I am interested in your response.)

LJWatson commented 3 years ago

"I did not believe the W3C was established to weigh in on ethical principles of how people use Web technology, but rather to enable them to access, communicate and share information via interoperable web standards."

There is an implicit responsibility in those things, and one that W3C has perhaps not always managed well enough to date - hence this document which envisages things differently.

It is not about governing the way people use technology, but about taking responsibility for creating technologies that enable people to choose how to use them in an informed way.

There is a reason most medicines taste bad - it informs people that taking them freely is not a smart thing to do. It does not stop someone from doing so, but by design it discourages accidental or excess use. In other words, it's responsible design.

joshuakoran commented 3 years ago

Given that we both live in the US, there are many social values we share. However, returning to the issue I opened, not all of these ought to be determined by technology alone.

I proposed that the document would benefit from distinguishing the types of issues the W3C will attempt to address (the three buckets described in my original comment).

For example, within our own society there are many social issues that remain hotly debated (e.g., gun control, abortion, legalizing specific drugs). When looking beyond our society there are far more issues that lack global agreement.
But let’s not focus on the issues above, which I suggest fall into the third bucket, and instead look at issues to put in the first bucket.

Where I think we disagree is on the idea that users should be informed in advance that a bad actor will perpetrate a future crime. The trouble is that only honest criminals would inform people in advance of their malicious intent. Thus, to keep people safe by default, we can either remove access to technology that any bad actor might use to cause some people harm, impair the utility of the technology to reduce risk, or work with governments to help bring bad actors to justice.

Taking steak knives as an example, most often they are used appropriately. Unfortunately, sometimes they are used to stab someone. Removing access to or dulling all knives does not seem like the right response given the utility tradeoffs involved for the vast majority of people who use this same technology appropriately.

My point is the use of the technology, rather than its existence, determines whether a technology is used to cause harm.

I believe the development of web standards should not dictate that only certain forms of expression, interaction or sharing are allowed, but should focus on interoperable data that allows expression, interaction and sharing. Can such interaction cause harm (e.g., hate speech)? Unfortunately, yes. However, we agree to put dictating the appropriate forms of expression outside the W3C’s mission, given the delicate balance of social liberties and individual rights involved with free speech.

We likely agree that people should have greater choice in interacting with more web publishers. Making it easier for publishers to participate on the Web supports more publishers. Supporting more publishers better represents the diversity of voices around the globe and within each society. Thus, to ensure we do not impair the utility of the Web, we should ensure the mission and technology standards promoted by the W3C do not raise barriers to entry for these publishers, who make the Web the vibrant, diverse and wonderful collaborative creation that it is.

I believe reducing entry barriers depends on making it easier for authors, publishers and people to interact with one another via interoperable data and standards. We also agree we should protect people from malicious code that corrupts or otherwise holds a person’s property hostage (e.g., ransomware). I support HTTPS and preventing man-in-the-middle attacks that interfere with the bi-directional data transfers required to provide engaging and interactive user experiences.

While we seem to agree that the W3C should not govern how people use technology, the current document does seem like it could be used to justify web standards that would reduce the number of publishers that could operate on the Web by deciding for everyone what data ought to be sharable and who can choose to interact with whom.

Thus, perhaps the question to answer is how important is decentralization to the vision and future of the Web?

cwilso commented 3 years ago

I find in the US there are clearly some social values that aren't shared, so I try not to make that assumption. :)

I have a few responses:

1) Things that the W3C should attempt to address at the code layer are really strategic goals, not principles and values (which this Vision is addressing). That work is being done as well, and we're trying to pipeline better, but it's being led by Avneesh on the AB, and any functional changes of course would end up going through the AC as well.

2) I did not say "users should be informed in advance that a bad actor will perpetrate a future crime." I don't think we have any intention of doing that; instead, the point is that we should have the intent of ensuring users understand what they are getting into. I don't want to define bad actor vs good actor here (or elsewhere, really), or what a "crime" is - aside from actual crime, obviously - I want users to be in control and informed of what's going on. To use your steak knives analogy - I'm not suggesting we ban or blunt steak knives; I'm suggesting that you should know if someone is holding a knife in their hand.

I am not suggesting attempting to ban hate speech in the web platform. Indeed, I support the rich variety of content available on the web, and I am empathetic that we need to innovate new ways to support less "central" publishers. However, I do believe that first and foremost, users should be in control of their presence on the Web. I would assume you DO agree that hostage-taking is bad, for example - but I extend that to the hostage-taking of a user's privacy, and I think we need to do better as a society.

3) I admit I am somewhat sensitive to the term "DEcentralization" - because I simultaneously don't think the Web is centralized, and also see properties of the web that I don't see working when there are NOT centers of gravity (not A center of gravity, you understand, but centers). As an example - the scale of Twitter means I can communicate with nearly any of my friends there. Do I know how to email them (email being a much less centralized system)? Of course, and I do; but that center of gravity is useful. I certainly do not intend to encourage centralization any further, but I also am not on a mission to eradicate any centers of gravity.

I will note that the document DOES contain the line "We must ensure the Web does not favor centralization." It is not ideally placed, and as I intend to separate the history section into a different document, it should stay here; I do think we need to continue to support and innovate new ways to support decentralized properties (and the "long tail", as it were - I frequently speak about this as one of the key aspects of the Web).

cwilso commented 3 years ago

I will note, by the way, that Avneesh has agreed to move the discussion of Strategic Goals here, so those can be discussed here.