bitcoin-core / meta


Moderation guidelines feedback #1

Closed achow101 closed 3 months ago

achow101 commented 4 months ago

Previous discussion of the guidelines was at https://gist.github.com/achow101/9192ad26dc4ef08e9c899caeddc968ef

josibake commented 4 months ago

@ryanofsky still a bit confused by this sentence:

Moderators may remove comments that do not contribute new information or constructively reframe existing information.

IIUC, it's saying that moderators may remove comments that "constructively reframe existing information." Constructively reframing existing information seems like a good thing to me. For example, if there are several comments from two separate people and a third person comes in and summarizes the arguments both are making, I would view that as something that contributes to the discussion in a meaningful way, but it seems like this sentence is discouraging that?

harding commented 4 months ago

I think that sentence is meant to read: "...that do not either contribute new information or constructively reframe existing information."

josibake commented 4 months ago

I think that sentence is meant to read: "...that do not either contribute new information or constructively reframe existing information."

Ah, makes more sense. I still find the wording to be a bit ambiguous, what about something like "If a comment does not constructively reframe existing information or does not contribute any new information, it may be removed" ?

harding commented 4 months ago

From the proposed policy:

Meta-discussions about moderation decisions or moderation policy are off-topic in Bitcoin Core software projects. ... For borderline posts that are not outright spam or attacks, but don't meet standards of discussion, moderators may choose to hide the comments

To me, this implies that off-topic discussion about moderation will be hidden (at least on the first offense). I think it would be advantageous to say that any posts about moderation outside of bitcoin-core/meta will be deleted (even if they have other merits).

The problem I'm concerned about here is:

Even though the comments are being hidden, they begin to represent a parallel conversation that undermines the goals of having moderation and of having discussion of moderation happen on a separate repository. It's like how /ignore on IRC and Mute on X/Twitter can be ineffective when too many other people you know aren't ignoring or muting. Making it clear that C's and D's comments will be deleted as soon as a moderator sees them will help ensure discussion about moderation is kept separate from technical discussion.

Suggested edit:

diff --git a/MODERATION-GUIDELINES.md b/MODERATION-GUIDELINES.md
index eb99cf9..7d5968d 100644
--- a/MODERATION-GUIDELINES.md
+++ b/MODERATION-GUIDELINES.md
@@ -8,7 +8,7 @@ On the internet, different discussion forums will have different cultural norms,
 - Comments will be about ideas, not people.
 - Comments may offer pointed criticism, if it is criticism about specific technical ideas or decisions, not general criticism, or criticism of individuals or groups. Even the smartest people can have ideas that don't work out, and people with good intentions can make decisions that backfire. It does not add a lot of value generally to speculate about peoples motives or capabilities when discussing the merits of their ideas, and doing so will be considered off-topic in technical discussions.
 - Comments should contribute new information. A certain amount of repetition is critical for learning, dissemination of ideas, and understanding in communication, but it can be taken too far when the same information is repeated in the same context. Moderators may remove comments that do not contribute new information or constructively reframe existing information.
-- Meta-discussions about moderation decisions or moderation policy are off-topic in Bitcoin Core software projects. If you wish to discuss moderation, please file an issue in [bitcoin-core/meta](https://github.com/bitcoin-core/meta/issues/new), which is the home for project management issues, and nontechnical discussions relating to development of the codebase.
+- Comments about moderation decisions or moderation policy are off-topic in Bitcoin Core software projects and will be deleted. If you wish to discuss moderation, please file an issue in [bitcoin-core/meta](https://github.com/bitcoin-core/meta/issues/new), which is the home for project management issues, and nontechnical discussions relating to development of the codebase.

 ## Role of Moderators

When moderators ban someone, there should be an issue posted https://github.com/bitcoin-core/meta/issues to provide a public indication that some discussion was suppressed. This could be in dedicated issues, or in a catch-all issue for temporary bans. To avoid embarrassment or potential backlash, the post does need not include name of the person being banned or reasons for the ban.

This sounds like an issue can be opened "[Username withheld] was banned for [reason withheld]", which doesn't seem very useful to me in determining what discussion is being suppressed. I would suggest always including the name/username of the person being banned, but also that the posts announcing bans should be made using a bot that obscures the identity of the banning moderator.

On that note, I think it would be ideal if all actions by moderators were performed by a bot that (1) hid the identity of the specific acting moderator from the public and (2) kept a private record of moderator actions for review and (possibly) time-delayed disclosure. That way any criticism of moderator activity is directed at the entire project rather than individual moderators. I don't think this needs to be moderation v1, but it seems like a useful v2.

mzumsande commented 4 months ago

When moderators ban someone, there should be an issue posted https://github.com/bitcoin-core/meta/issues to provide a public indication that some discussion was suppressed. This could be in dedicated issues, or in a catch-all issue for temporary bans. To avoid embarrassment or potential backlash, the post does need not include name of the person being banned or reasons for the ban.

Simple spammers should be exempt from this, I don't think that any extra bureaucracy is needed for those.

achow101 commented 4 months ago

I think that sentence is meant to read: "...that do not either contribute new information or constructively reframe existing information."

Made this change.

Suggested edit:

Suggestion taken.

Simple spammers should be exempt from this, I don't think that any extra bureaucracy is needed for those.

I've updated the sentence to exclude obvious spam.

I would suggest always including the name/username of the person being banned,

I'm not as sure about this. If they're just named, I think that's likely to invite other people to chime in speculating and/or defending that person, both of which are noisy and cause extra headaches for the moderators. I think it would either have to be that a name and reason are provided, or none at all, just an acknowledgement that some action occurred?

On that note, I think it would be ideal if all actions by moderators were performed by a bot that (1) hid the identity of the specific acting moderator from the public and (2) kept a private record of moderator actions for review and (possibly) time-delayed disclosure. That way any criticism of moderator activity is directed at the entire project rather than individual moderators. I don't think this needs to be moderation v1, but it seems like a useful v2.

I'm not sure that a bot would actually be helpful here since none of these actions should be automated. However, we could just make "bitcoin core moderator 1", "bitcoin core moderator 2", etc. accounts so that the moderator identities are more hidden. That would also fit with the idea of rotating moderators periodically as those accounts can be recycled and handed over to new moderators. But a few concerns there are about the increased friction to being able to take moderator actions, and also whether having separate accounts even affords any real privacy.

willcl-ark commented 4 months ago

For clarity, would comments made outside of the bitcoin core repositor[y|ies] ever be expected to translate into moderation actions here?

My read of the initial paragraphs, and which I would agree with personally, would be that they would not.

Of course there might be some hypothetical extreme circumstance in which it may make sense, but by and large I think it's simpler for moderators to only consider internal repository actions.

ryanofsky commented 4 months ago

re: jonatack's comments [1] [2] I think it would be good to have a separate document with tips and positive advice on how review and discuss things productively. I think the moderation guidelines could link to a separate document with that information, but should avoid trying to be too prescriptive themselves.


re: josibake's comment https://github.com/bitcoin-core/meta/issues/1#issuecomment-2094121544 about reframing existing information, that was just clumsy wording and the suggestion makes it clearer.


re: harding's comment https://github.com/bitcoin-core/meta/issues/1#issuecomment-2094361530 about hidden comments potentially leading to "parallel conversations," that seems like a good point and it's something I didn't think of. I agree with the suggested edit to just say certain comments will be deleted.

Taking a step back, I think this document has two potential audiences. The first audience is people who have had some moderation action taken against them. The hope would be they could read this, not take things personally, and learn how to get their message across while following the guidelines. The second audience is actual moderators and people interested in the topic of moderation.

The first section is targeted at the first audience, and I think should set the simple expectation that any post that crosses the lines can be deleted.

The remaining sections should add more nuance, making a distinction between hiding and deleting comments, and giving moderators the flexibility to hide comments that have problems but contribute some useful information, instead of deleting them.

I think moderators should have a lot of discretion and a lot of options they can choose, at least to start off. We can see what works and doesn't work before becoming more rigid.


re: harding's comment https://github.com/bitcoin-core/meta/issues/1#issuecomment-2094361530 about opening new issues with "[Username withheld] was banned for [reason withheld]," that is definitely not the intent!

The "When moderators ban someone..." bullet point is trying to be brief and trying to allow for flexibility in handling different situations, but right now it might be too imprecise.

I think in practice we should have a catch-all github issue open in the bitcoin-core/meta repository to track bans. Then, if/when bans happen, moderators can post simple comments in that issue like "username ryanofsky was banned until May 10 to calm down inflammatory discussion in issues #234 and #456" or "2 users were banned until May 13 for making ad-hominem remarks" or something more minimal like "a user was banned until Jun 1." If there are more complicated problems, separate issues could be opened.

I think it is important for there to be some public indication of when temporary bans happen for the sake of transparency, so there is some visibility into what moderators are doing. Also for the practical reason that if someone is banned, there should be a place they can check to figure out what is going on and how long they are banned.


re: harding's comment https://github.com/bitcoin-core/meta/issues/1#issuecomment-2094361530 about having a moderator bot to hide moderator's username, I agree it could be useful, especially for "v2".


re: mzumsande's comment https://github.com/bitcoin-core/meta/issues/1#issuecomment-2096256057 about not needing to document bans for spammers. Definitely agree with that, and hopefully the text is clearer now. Permanently banning fake usernames that aren't contributing anything should not require any bureaucracy. But if temporary bans are put in place to calm overheated discussions, those should be visible somewhere and informally tracked.


re: kanzure's comment https://github.com/bitcoin/bitcoin/issues/29507#issuecomment-2100549946 about the transparency section imposing too much work, I think we should eliminate anything that is too much work, but it's worth trying something and seeing where the problems are in practice. I also hope being "clear about how decisions are made" is not too high of a standard to meet. But if it is, we can weaken that language or rephrase it. On policy documents effectively being "instructions for your adversaries on how to defeat a project": honestly, I agree very much with this sentiment, and it's why I think moderators should have a lot of freedom individually to make decisions. But I hope having some transparency after the fact is possible, too. If it doesn't work out in practice, though, we can accept reality and adapt.


re: achow101's comment https://github.com/bitcoin-core/meta/issues/1#issuecomment-2101077244, agree with everything and all the edits look good.


re: willcl-ark's comment https://github.com/bitcoin-core/meta/issues/1#issuecomment-2101113368 about outside comments, maybe it's worth saying something about them, but I think, as noted, the current document implies they are not directly relevant because it doesn't consider them. I think when outside discussion spills over and doesn't meet the guidelines, it can be dealt with at that point.

harding commented 4 months ago

Re: https://github.com/bitcoin-core/meta/issues/1#issuecomment-2101077244 by @achow101

I'm not sure that a bot would actually be helpful here since none of these actions should be automated.

I'm more thinking of IRC-style moderation bots that hold room ops and accept private commands from a list of trusted admins. E.g., if user mallory is being a pain in room #foo, admin user alice types /msg foobot ban mallory 7d.

In other words, the bot is just a proxy to the GitHub API which adds in some extra features, like logging or automating the process of things like "ban + leave comment in meta repo".

For example, if a simple script coremod was put on a server which you could login to over ssh, you could do something like:

alias coremod="ssh coremod.bitcoincore.org coremod"
# Ban harding and automatically create (or append to) an issue describing why
coremod ban harding "Making simple things way too complicated for everyone"
# Delete an off-topic comment
coremod delete off-topic "https://github.com/bitcoin-core/meta/issues/1#issuecomment-2094361530"

I think that would be pretty low friction for moderators, and it would allow all actions to be publicly performed by the moderator bot account. With a clever script, it could also log who requested the ban and delete actions.

pinheadmz commented 4 months ago

@harding I have a bot running right now for testing: https://t.me/bitcoinmoderatorbot

On my own server I receive all webhooks from the bitcoin core code repo (which I also use to post updates in https://t.me/bitcoincoregithub as well as #bitcoin-core-github on IRC). I filter these webhooks for issue / PR comments and then send each one to a ChatGPT "assistant" that has been prompted with the moderation policy currently on master in this meta repo. The AI responds with either an "OK" or an explanation of how the comment violates the policy, in which case it notifies me (and anyone else) in the telegram channel.

It's set up to be a tool to assist moderators, since there can be hundreds of comments in a day, and so far it is only costing me $0.50/day or so in OpenAI API credits.

We have been pretty well behaved this week but here is one example of a triggered response:

[Screenshot: example of a triggered response from the bot, 2024-05-08]

1440000bytes commented 4 months ago

I will say it again even if hidden..

These guidelines are bypassed every day by people who have some influence. They will be used by those people against some reviewers, to avoid review.

They will be used by governments to win legal cases. It doesn't matter to them who is involved.

midnightmagic commented 4 months ago

I believe you will find that attackers not only don't care about rules, but will use the rules as an explicit vector for attack. These kinds of moderation transparency requirements, and especially the comments about Bitcoin being some kind of organization (which implies leaders and followers), will 100% be used against you, especially if any more lawsuits come to be filed. You have no particular duty to make the decisions that defend your own sanity and/or time from attack transparent or process-based.

Your attackers won't be convinced to stop attacking—they don't care.

Bitcoin's population is unique amongst FOSS projects in that some of them are strongly incentivized to attempt to subvert, harm, attack, and abuse people who work on Bitcoin — and when I say strongly incentivized, I mean in the millions or billions of dollars range. Any rules, CoCs, moderation policy documents, or anything which goes to any particular length to describe literally anything can and will be used as a weapon both against you, and against people who support you, in an attempt to erode your base.

Your ability to adapt to a new form of attack also shouldn't be shackled by documents that couldn't have perceived, let alone foreseen, what is or might be coming to make your lives harder.

I strongly recommend you abandon these efforts. They won't serve you. Kanzure is right—they will only serve as a roadmap for attackers whose actual bad intentions are well-funded and fuelled by a small army of people who have the willingness to maintain a literal active daily attack effort for literally more than a decade.

pinheadmz commented 4 months ago

Excuse my naivety, but could someone give me an example of how an attacker would use a policy designed to preserve and focus on technical work as an attack? I was under the impression that the moderation policy is designed to mitigate the mental denial of service by participants posting off-topic or non-technical comments.

1440000bytes commented 3 months ago

Excuse my naivety, but could someone give me an example of how an attacker would use a policy designed to preserve and focus on technical work as an attack? I was under the impression that the moderation policy is designed to mitigate the mental denial of service by participants posting off-topic or non-technical comments.

This is just an example, and I am sure real attackers can figure out worse things:

  • Developer asks moderators to ban a long-term contributor for harassment etc.
  • They do not follow moderation guidelines and just give a warning.
  • Developer goes to court, as advised by an attacker who paid for it. She cites the moderation guidelines in the lawsuit.

willcl-ark commented 3 months ago

I still don't think I fully understand how this would work. What would they be suing for here? Harassment? A single interaction cannot constitute harassment.

Did you read the moderation procedure before posting this? [edit: I'm curious to know if you've read and disagree with the content of these guidelines, or if you're generally opposed to any form of moderation guidelines?] It clearly details that moderation will necessarily be imperfect, and that a series of escalating actions will be available, including requests for self-moderation, hiding unproductive comments, and finally:

If moderation of individual posts fails, moderators will use warnings, timeouts and bans at increasing rates based on prior patterns of behavior. Usually mildly uncivil behavior (such as being off-topic/disruptive or rudeness) will result in a warning or temporary ban (in the order of days). Longer term bans may result from repeated or frequent violations. Moderators will consult with maintainers before making any bans longer than 1 week.

It does not specify exactly which actions should be taken in response to any particular behavior, except for spam deletion. Other than that, the keyword may (I understand this to be an RFC 2119-style "may") is used to highlight that actions may be taken. Therefore it seems unlikely that

  • They do not follow moderation guidelines and just give a warning.

...could be held up in any enforceable way.

Furthermore, a complaint can be raised in this repository regarding any moderation decision, if so desired.

I don't plan to discuss specific "attack" scenarios any further, but my opinion is that the guidance is sufficiently optionally worded that any "attacks" of this nature would likely not be enforceable and therefore constitute little risk.

achow101 commented 3 months ago

I'm more thinking of IRC-style moderation bots that hold room ops and accept private commands from a list of trusted admins. E.g., if user mallory is being a pain in room #foo, admin user alice types /msg foobot ban mallory 7d

In other words, the bot is just a proxy to the GitHub API which adds in some extra features, like logging or automating the process of things like "ban + leave comment in meta repo".

I think that's a good idea, but maybe overkill for now. That seems like something we could deploy in the future if moderators feel like they need more privacy.

jonatack commented 3 months ago

Suggestion: "pull requests will be aggressively moderated" -> s/aggressively/actively/

Edit: the guidelines were updated with this suggestion -- thank you!

1440000bytes commented 3 months ago

I don't even know why this was created if some people have already decided something.

Anyway, the things I shared won't affect me, but I was thinking about others. Not just me: there were 2 more people with different opinions, but none were discussed in the meeting apart from the regular circle jerk of +1. Maybe that is not respected by Chaincode and Blockstream anymore.

ariard commented 3 months ago

I strongly recommend you abandon these efforts. They won't serve you. Kanzure is right—they will only serve as a roadmap for attackers whose actual bad intentions are well-funded and fuelled by a small army of people who have the willingness to maintain a literal active daily attack effort for literally more than a decade.

It is certainly very correct that those moderation rules could be exploited through active sabotage by well-funded criminal organizations. No human artifact is perfect, be it the Bitcoin codebase, other major open-source software, an organization charter, or even works of poetry. E.g. Walt Whitman's Leaves of Grass is excellent, yet you can always criticize its lack of respect for poetic metre compared to classical poetry.

I'm seeing the existence of moderation rules as a necessary trade-off (of the kind one meets a lot in engineering design). On one hand, the absence of civility and courtesy norms to police online discussions can quickly lead to a fast downgrade of the quality and depth of complex technical conversations. On the other hand, over-compliance with and over-enforcement of those norms can have a deleterious effect on the most fruitful ideas or empirical insights for solving daunting technical issues.

So it's more a question of equilibrium, like many things in all walks of life. In this regard, of course the current moderation rules present loopholes, though they are far more tempered than many of the so-called codes of conduct encumbering other open-source projects.

I think there is an effort here to formalize a set of moderation rules which is somewhat culturally agnostic. It doesn't indulge in the usual US-specific "culture war" / "victimology-as-entitlement" aspect, nor does it use very subjective terminology, e.g. "if Alice or Bob feels uncomfortable, Carol is evil and shall be banned ad vitam aeternam". I think this is appreciable, at least speaking on this issue as someone who was raised in a European culture.

All that said, some more bullet-point-style recommendations.

1) I think there should be a moderation@bitcoincore.org. If you're in a situation of conflict of interest with someone holding moderation privileges, and you believe it's altering their moderation decision-making, at least you can raise the issue on a non-public communication channel and, if justified, ask that this moderator not intervene in moderation decisions related to you. Conflicts of interest didn't start in the open-source world with Bitcoin; they were doing that very well in the 90s with all the BSD forks, if my memory is correct.

2) I think the moderator set shall be public and dissociated from any other responsibilities, such as being a maintainer or a sec-list recipient. Public, as this is the norm in the open-source world (e.g. Rust has a public moderation team), and formally this makes the difference between moderators being guardians of civility and courtesy norms or belonging to a blackmail club (IIRC petertodd raised the same concern about publicity when moderation was proposed a long time ago). Dissociated, as this avoids a concentration of privileged permissions on the same GitHub accounts (this project is already relying far too much on the GitHub platform, sec sighs). There is already a default fallback to maintainers if needed.

3) I think substantial (account-based) moderation decisions shall be archived in a dedicated repository, and those decisions shall be public by default. Building an archive should produce stability of moderation decisions, and it minimizes phenomena like a moderator taking fast-paced substantial moderation decisions, e.g. a newly appointed moderator finding themselves busy attending the Twitter space of the moment to play "thought leader" or whatever. If you have too much work as a moderator, just ask to have new moderators appointed.

4) I think direct private reach-out to the known owner of a GitHub account in the name of moderation, whatever the communication channel, shall be disregarded. Formalism shall be regarded as a value in itself in the conduct of a moderation issue, and the availability of private points of contact shall not be abused for this purpose. If you have to, tag people publicly and ask them to reach out on a designated endpoint, at a time of their convenience. Again, see 1) and 2): publicity shall be the principle, privacy the exception.

Finally, and I think this is something which is less formalizable, as it's more about ethics: "Never take a mod decision that you won't be able to own, eyes in the eyes of the user, if one day you meet that human being in person." In my experience, this is an ethical guideline worthy of meditation if you're involved with security stuff.

pinheadmz commented 3 months ago

@achow101 I suggest:

  1. Close this issue
  2. Enable "Discussions" for this repo, where I would like to start some threads with @ariard and @1440000bytes regarding attack vectors in moderation policy
  3. All other suggestions for policies including "erase the entire thing" should be submitted as pull requests just like everything else we do as an organization.

achow101 commented 3 months ago

@pinheadmz Done

ariard commented 3 months ago

@pinheadmz Done.

By the way, do you confirm being a moderator, or someone entitled with project permissions to do moderation, yes or no? For the clarity of the "meta" discussion itself, and your interest at stake.

pinheadmz commented 3 months ago

@ariard yeah I'm happy to help

https://bitcoin-irc.chaincode.com/bitcoin-core-dev/2024-05-16#1026806;

ariard commented 3 months ago

https://bitcoin-irc.chaincode.com/bitcoin-core-dev/2024-05-16#1026806;

ok, thanks for the info.