
new code of conduct, enforcement policy #129

samspills closed this pull request 3 months ago

samspills commented 4 months ago

This PR introduces a new Code of Conduct and Enforcement Policy for Typelevel. These documents are forked from the Python Software Foundation Code of Conduct and Enforcement Policy.

The scope of this Code of Conduct and Enforcement Policy encompasses both organization and affiliate projects (as described in the Typelevel Charter). While the Typelevel Charter has always specified that affiliate projects must adhere to the Typelevel organization policies, including the Code of Conduct, this has not been enforced in practice.

Prior to this change, the Typelevel Code of Conduct was a fork of the Scala Code of Conduct. We, the Typelevel Steering Committee, are choosing to fork the Python Software Foundation Code of Conduct because it has an accompanying enforcement policy, and there is associated training available. Some Typelevel Steering Committee members took this training through Otter Technology in late 2023. All Code of Conduct Committee members will be encouraged to take this (or equivalent) training going forward.

We believe our community is already a kind and welcoming place. However, a Code of Conduct must be enforced to maintain community trust and safety. Additionally, an enforcement policy is useful to provide transparency and accountability on how the Code of Conduct Committee will work.

What happens next?

The Code of Conduct and Enforcement Policy must be voted in by the Typelevel Steering Committee. We will then update our site and begin updating the CODE_OF_CONDUCT files in organization project repositories.

As an affiliate project maintainer, what should I expect?

Once the Code of Conduct is voted in, all affiliate projects will be expected to adopt it going forward. We will open pull requests to update each affiliate project's CODE_OF_CONDUCT file. If a project chooses not to adopt the Typelevel Code of Conduct, they can close the PR, and we'll handle removing the project from the affiliate project list. This is totally fine and we support your choices! We believe open source developers are free to choose the projects they contribute to and the communities they support :heart:
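
If you're curious what that will look like mechanically, here is a rough sketch of how one such PR could be opened with the GitHub CLI. This is not our actual tooling; the repository name, branch, and messages below are placeholders, and it assumes `gh` and `git` are installed and authenticated:

```scala
// Rough sketch only: open a CODE_OF_CONDUCT.md update PR against one
// affiliate repo. The repo name, branch, and messages are placeholders;
// assumes the file is already tracked in the repo and that we can push
// a branch directly (in practice a fork may be needed).
import sys.process._
import java.nio.file.{Files, Paths, StandardCopyOption}

object OpenCocPr {
  def main(args: Array[String]): Unit = {
    val repo    = "typelevel/example-affiliate" // placeholder
    val workDir = "example-affiliate"
    val branch  = "typelevel-coc-update"
    val cwd     = new java.io.File(workDir)

    // Clone the affiliate repo and create a branch for the update.
    Seq("gh", "repo", "clone", repo, workDir).!!
    Process(Seq("git", "checkout", "-b", branch), cwd).!!

    // Overwrite the project's CODE_OF_CONDUCT.md with the new document,
    // prepared locally beforehand.
    Files.copy(
      Paths.get("CODE_OF_CONDUCT.md"),
      Paths.get(workDir, "CODE_OF_CONDUCT.md"),
      StandardCopyOption.REPLACE_EXISTING
    )

    // Commit, push, and open the PR; closing the PR opts the project out.
    Process(Seq("git", "commit", "-am", "Adopt the new Typelevel Code of Conduct"), cwd).!!
    Process(Seq("git", "push", "-u", "origin", branch), cwd).!!
    Process(
      Seq("gh", "pr", "create",
        "--title", "Adopt the new Typelevel Code of Conduct",
        "--body", "See typelevel/governance#129. Closing this PR opts out."),
      cwd
    ).!!
  }
}
```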

Can an affiliate project maintainer participate in Code of Conduct enforcement?

Affiliate projects can list additional moderators in their CODE_OF_CONDUCT file; the Typelevel Code of Conduct Committee will work with those moderators as described in the Enforcement Policy's "Affiliate project processes" section. Additionally, if this work interests you, keep an eye out for future calls for Typelevel Code of Conduct Committee members!

samspills commented 4 months ago

Hey @typelevel/maintainers, just pinging to bring your attention to this one ☺️

valencik commented 4 months ago

I'll leave this open for discussion for at least one week before calling the vote.

gemelen commented 4 months ago

May I suggest amending this PR message, for the sake of clarity for people like me, to spell out who the "we" in "We are choosing ..." is?

NthPortal commented 4 months ago

I've been thinking about moderation and community safety for a bit now (including for reasons unrelated to Typelevel), and I realised that while the current (4a87649) enforcement policy addresses large-scale, long-term and/or repeated problems decently well, it addresses small-scale, real-time and/or high-frequency interactions poorly or not at all.

Fundamentally, the more frequent interactions are (e.g. a chat room), the more immediately concerns must be addressed. Moderators cannot wait on the scale of days to address something harmful (particularly to a minority), because the lack of response will be immediately visible and will imply to those who are present at the time that the community considers such behaviour acceptable. Many people will never see a response that happens a day or two later, or out-of-band. Real-time or high frequency communications require equivalently fast responses. (I know moderators cannot necessarily respond immediately, but it needs to happen as soon as someone is available to do so.)

I'll give some examples to hopefully make my concerns clear:

  1. Person A sexually harasses person B in person/privately in a Typelevel space.
    • Person B reports this when no longer in proximity to person A. ✅ The current policy addresses the situation adequately.
    • Person B reports this while unable to leave the proximity of person A. ❌ The current policy is inadequate. Someone needs to enforce that person A does not interact with person B for the remainder of the proximity, starting now. After that is done, the current enforcement policy addresses the situation adequately.
  2. Person A gives a talk in a Typelevel space and uses harmful language in it.
    • Person A only uses non-egregiously harmful language once, or perhaps twice. ✅ The current policy addresses the situation adequately. It is likely unnecessary to interrupt the talk (and embarrass person A), and an announcement can be made after the event that such language is unacceptable.
    • Person A uses non-egregiously harmful language repeatedly. ❌ The current policy is inadequate. The talk must at minimum be interrupted to instruct person A to stop using the harmful language, and if person A continues to use harmful language (or the cumulative harm up until now is already too much), the talk must be terminated early.
    • Person A uses egregiously harmful language (e.g. the N-word, if person A is not Black) a single time. ❌ The current policy is inadequate. The talk must be terminated immediately.
  3. Person A uses harmful language in a comment on a PR/issue (in a Typelevel-affiliated repository).
    • Person A uses non-egregiously harmful language. ❌ The current policy is inadequate. Someone must step in within a short time period (probably no more than a day or two at most) and assert in-band that the use of such harmful language is unacceptable. Afterward, the current policy is sufficient for (re-)evaluating the immediate moderator action to see if it should be adjusted.
    • Person A uses egregiously harmful language. ❌ The current policy is woefully inadequate. Person A must be removed from the community and have their comment removed or redacted. Afterward, the current policy is sufficient for (re-)evaluating the immediate moderator action to see if it should be adjusted.
  4. Person A uses harmful language in a Typelevel-affiliated chat room (e.g. previously gitter, now Discord).
    • Person A uses non-egregiously harmful language. ❌ The current policy is inadequate. Someone must step in within a short time period (a few hours, maybe slightly more if outside of moderators' waking hours) and assert in-band that the use of such harmful language is unacceptable. Afterward, the current policy is sufficient for (re-)evaluating the immediate moderator action to see if it should be adjusted.
    • Person A uses egregiously harmful language. ❌ The current policy is woefully inadequate. Person A must be removed from the community and have their message(s) removed or redacted. Afterward, the current policy is sufficient for (re-)evaluating the immediate moderator action to see if it should be adjusted.

Obviously, no one likes being called out, admonished or reprimanded in public, so the desire to minimise in-band moderation is understandable. However, when others witness harmful actions and see nothing done, it makes it seem to them as if Typelevel finds the harm acceptable. Some may simply abandon the community long before the process described in the current enforcement policy is completed, or they may never return to the context (e.g. the PR/issue, chat channel, conversation, etc.) to see the later moderation action.

To make things more concrete, I will finish this essay comment with a fully-defined example:

In a chat room, someone made a glib comment about having PTSD from using [X] at [Y][^1], in such a way that it was clear from context that [X] merely (manageably) frustrated them. PTSD is an extremely unpleasant mental disorder caused by mental trauma. It can make everyday situations unbearable, and it can turn trivial interactions into agony. While certainly not everyone is bothered by this conflation, many (both those who have post-traumatic stress disorder and those who do not) find the use of the term PTSD to describe manageable frustration hurtful or upsetting, because it minimises the severity of the actual disorder. What would have happened if no one spoke up? Was a non-moderator (me) speaking up sufficient to indicate that this type of harmful language is not generally acceptable in the community, even if some people do it? What would the community have been like if moderators had been trained and empowered to call out the harm in-band, rather than having the original chatter start litigating the legitimacy of my objection?

I think the current enforcement policy is necessary, but not nearly sufficient.


[^1]: I don't actually recall if it was about a tool or at a previous workplace.

jducoeur commented 4 months ago

@NthPortal's observations are fine food for thought. It's too early in the morning here for me to have fully-baked opinions yet (so please forgive any brain-os and wordiness), but my initial gut reaction to much (not all) of it is that we should think about how to help more of the community feel empowered to speak out when necessary, and how to help provide them with perspective about when that's appropriate. We can't always wait for an official moderator to be available if there is an issue in-the-moment; if someone's doing harm, the people around them should feel like they can and should start dealing with it, rather than just watching passively and thus implying that it's okay.

(OTOH, I worry about untrained folks deciding that everyone should correct each other routinely -- that can be a recipe for potentially harmful tone policing, and can be weaponized by bad actors. The balance here is subtle.)

So this isn't a trivial question -- we need to think about how the wider TL community plays into all of this, especially when a problem needs to be addressed ASAP. Much of that's going to be more about culture than rules, I suspect (a healthy community is usually 90% about setting expectations properly, so that the formal procedures don't have to come into play too often), but that doesn't happen by accident: we should be thinking about how to steer it.

We should also think about how the practical work of the Discord mods plays into these policies. Much of the above is stuff that the moderators already feel empowered to deal with, I believe, and I don't think we've been thinking about it in this context. But we should make sure that we don't write ourselves into a corner, implying that the mods can't deal with problems that aren't explicitly enumerated here.

I don't think we should make the best the enemy of the good -- not all policy needs to be baked into the written CoC (excessively precise rules almost always leave more gaps), and I suspect some of the solutions here probably are going to be more in the realm of community education than the CoC per se, so I don't think we necessarily need to block the major improvements in this PR until we figure all of this out.

But this is a good reminder that the job's never going to be done: the CoC (and even more, the enforcement policy) are probably always going to be a work in progress, and likely to want further emendation over time. We shouldn't shy away from coming back and updating these documents if questions like the above turn out to warrant it. And we should chew on how best to deal with these in-the-moment use cases that require quick action.

samspills commented 4 months ago

@NthPortal thank you for your thoughts <3

In my reading, I think you're raising two separate issues.

In your first scenario, I think what you're pointing out is that our enforcement policy is not written to explicitly handle timely or urgent reports at in-person events. I would expect Code of Conduct Committee members to use their discretion (the existing enforcement policy does not require a delayed response; it just specifies a maximum timeline). That said, I think there is space for a follow-up here, to include some kind of in-person event addendum that addresses timeliness as one of the risk factors. As an aside: the Code of Conduct training that we took, and are recommending, did cover in-person events and the handling of urgent reports. Committee members should be prepared for those situations.

In your other scenarios, I think what you're pointing out is a lack of explicit/timely moderation, or of proactive enforcement of the Code of Conduct. The enforcement policy outlined here only describes the process of handling a reported Code of Conduct violation. Moderators in the Discord channel (or at an in-person event) are empowered to act on the Code of Conduct[^1] in the moment and should report the CoC violation as a follow-up. This process isn't described in the enforcement policy, and it's valuable to be explicit about that, so I will add it. In terms of actually improving moderation, that seems like another follow-up point; perhaps there is dedicated moderator training or resources that we should reference? We will look into it.

[^1]: I do believe that the proposed CoC covers all the hypothetical behaviours in your scenarios

valencik commented 4 months ago

@typelevel/steering: I think discussion has resolved enough to call the vote. The vote is open until March 6, 2024, 14:00 UTC, or until quorum (7) is reached, whichever is later. Please vote with reactions to this comment.

Threshold is 2/3 affirmative.
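
For concreteness: with n votes cast, a 2/3-affirmative threshold means at least ⌈2n/3⌉ approvals. A quick sketch (assuming the threshold is measured against votes cast):

```scala
// Minimum approvals for a 2/3-affirmative threshold over `votesCast` votes.
def approvalsNeeded(votesCast: Int): Int =
  math.ceil(votesCast * 2.0 / 3.0).toInt

approvalsNeeded(7) // 5: at exactly quorum (7), 5 approvals carry the vote
```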

valencik commented 3 months ago

With 7 approvals, we have reached quorum and the vote is affirmed. Thanks everyone.