DPGAlliance / DPG-Standard

Digital Public Goods Standard
Creative Commons Attribution Share Alike 4.0 International

Indicator 9. Do No Harm #89

Closed llsandell closed 2 years ago

llsandell commented 2 years ago

Although this is noble, I find it hard to see how this can be demonstrated, verified or even enforced. I would like to see a clarification as to what “harm” means in this context.

Examples: most social media platforms are set up for people to interact, share thoughts and ideas, and generally have a good time. Much of what we know from social media is directly transferable to other kinds of applications, systems or programs. Yet we also know that people have been abused, bullied and harassed through interactions on social media platforms, and some have even been driven to suicide. That is harm.

We also know that social media platforms today are used for selling drugs, contraband, weapons, human trafficking and much more. That is harm, whether direct or indirect.

Another form of harm is harm caused by someone gaining access to personal and/or private information, either by exploiting technical flaws or by otherwise unlawfully accessing data, and using it to target specific people or groups (religious and political beliefs, sexual orientation, financial matters, health and mental health records, to name a few). Several Sony employees fell victim to extortion through a third-party service handling the employees' medical data. This ended up doing harm to both individuals and the company.

We also know that publicly available user-created content has led to the prosecution of people who do not share the views of regimes, religious organizations and political adversaries. Where does the project's responsibility end, and where does the user's responsibility start? Without clarification, this will appear more like a buzzword or an empty phrase. To me, this should either be explained further or incorporated into Indicators 9.a-9.c.

prajectory commented 2 years ago

https://github.com/DPGAlliance/DPG-Standard/pull/73

Looping in a discussion on this topic that has already taken place.

prajectory commented 2 years ago

Do No Harm is a concept laid out by the UN Secretary-General in the Roadmap for Digital Cooperation. If you look at the DPG Standard, you will see that we have tried to adopt this concept and bring it to life in a way that makes it "feasible" and "actionable", which is why we explicitly call for do no harm by design.

It would encompass all the examples you mention above, but we have tried to bucket them under 9.a, 9.b and 9.c. We want to ensure that the project has thought through risk mitigation steps and has the necessary policies built into its design for the security and safety of both the project's assets and the user assets it deals with.

These are some of the questions we ask under the umbrella of "Do No Harm":

- On the whole, does this project take steps to ensure that it anticipates, prevents and does no harm?
- Please describe any additional risks and mitigation steps that this project uses to prevent harm.

9.a. Data Privacy & Security

- Does this project collect or store personally identifiable information (PII)?
- If yes, please list the types of data collected and/or stored by the project.
- If yes, does this project share this data with third parties?
- If yes, please describe the circumstances in which this project shares data with third parties. Please add links as relevant.
- If yes, does the project ensure the privacy and security of this data, and has it taken steps to prevent adverse impacts resulting from its collection, storage and distribution?
- If yes, please describe the steps, and include a link to the privacy policy and/or terms of service.

9.b. Inappropriate & Illegal Content

- Does this project collect, store or distribute content?
- If yes, what kinds of content does this project collect, store or distribute? (e.g. children's books)
- If yes, does this project have policies that describe what is considered inappropriate content? (e.g. child sexual abuse material)
- If yes, please link to the relevant policy/guidelines/documentation.
- If yes, does this project have mechanisms for detecting and moderating inappropriate/illegal content?
- If yes, please describe the mechanism for detecting, reporting and removing inappropriate/illegal content. (Please include the average response time for assessment and/or action, and link to any policies or descriptions of how inappropriate content is handled.)

9.c. Protection from Harassment

- Does this project facilitate interactions with or between users or contributors?
- If yes, does the project take steps to address the safety and security of underage users?
- If yes, please describe the steps this project takes to address risk or prevent access by underage users.
- If yes, does the project help users and contributors protect themselves against grief, abuse, and harassment?
- If yes, please describe the steps taken to help users protect themselves.

I totally empathise with what you said. It's always difficult to figure out where the buck stops, and therefore we have made it very clear that, as an input to becoming a DPG, a project has to have thought about potential harm and designed to mitigate it. Things beyond the owner's control need to be addressed with robust policies that strive to avoid harm. The downstream implications are not fully known, but as long as there are systems in place to handle them, embedded in the architecture and design, that works as a baseline criterion.

prajectory commented 2 years ago

I am closing this issue for now, since you have also gone deeper into 9.a, 9.b and 9.c. We can chat about them independently there.