matrix-org / matrix-spec-proposals

Proposals for changes to the matrix specification
Apache License 2.0

Ability to shadowban users #789

Closed: vlaho-m closed this 3 years ago

vlaho-m commented 7 years ago

I'd like to suggest providing homeserver admins the ability to shadowban one of their local users; that is, set a flag on an account that will cause the homeserver to auto-redact that user's messages from that point in time forward, with the exception that the shadowbanned user still sees their messages as unredacted and therefore cannot tell they've been shadowbanned. This would be extremely useful in dealing with persistent spammers and trolls because unlike current moderation tools (kick, ban, mute), it isn't overt and doesn't signal the problem user that they need to create a new account.
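To make the intended behaviour concrete, here is a minimal, purely illustrative sketch (in Python, since the reference homeserver is written in it). The per-account `shadowbanned` flag and the routing helper are hypothetical names for this write-up, not an existing Synapse or spec API:

```python
from dataclasses import dataclass


@dataclass
class Account:
    user_id: str
    shadowbanned: bool = False  # the proposed per-account flag


def views_of_event(sender: Account, event: dict, participants: list[str]) -> dict[str, dict]:
    """Return the copy of `event` that each participant should receive."""
    redacted = dict(event, content={})  # a pre-redacted copy for everyone else
    views = {}
    for user_id in participants:
        if sender.shadowbanned and user_id != sender.user_id:
            views[user_id] = redacted   # others see it as already redacted
        else:
            views[user_id] = event      # the sender still sees it unredacted
    return views


# Example: Alice is flagged, so Bob receives a redacted event while Alice does not.
alice = Account("@alice:example.org", shadowbanned=True)
msg = {"type": "m.room.message", "sender": alice.user_id,
       "content": {"msgtype": "m.text", "body": "spam"}}
out = views_of_event(alice, msg, ["@alice:example.org", "@bob:example.org"])
assert out["@alice:example.org"]["content"]["body"] == "spam"
assert out["@bob:example.org"]["content"] == {}
```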

Because this would be a very powerful feature with potential for moderator abuse, and it additionally would have implications for server resource usage (shadowbanned users' messages are still stored and processed, which means problem users can still affect the homeserver), I propose that only homeserver administrators would have the ability to shadowban accounts on a per-server, rather than per-room, basis.

I'd really like the feedback of the Matrix team and our administration and moderation community on this feature request. If there's general agreement that shadowbanning would be useful to have and the semantics of the feature are decided by the community, I will be happy to implement the feature and submit pull requests.

Half-Shot commented 7 years ago

Hey, this is an interesting request and I want to dig into it a little.

I'd like to suggest providing homeserver admins the ability to shadowban one of their local users; that is, set a flag on an account that will cause the homeserver to auto-redact that user's messages from that point in time forward,

This requires that admins be responsible for their users. A spammer (or even an admin hosting several spammers) could easily spin up a synapse installation on $CLOUD_PROVIDER and then you'd still have spam, but you'd have to fall back to old tools. You could shadow ban a whole server to deal with it, but then you run the risk of hitting legitimate users as well.

This would be extremely useful in dealing with persistent spammers and trolls because unlike current moderation tools (kick, ban, mute), it isn't overt and doesn't signal the problem user that they need to create a new account.

This is my biggest worry. This is the first time I've seen a feature requested that actively lies to the client. Worse yet, if the user had two clients open, they could clearly see the ban in effect and its point would be nullified.

I propose that only homeserver administrators would have the ability to shadowban accounts on a per-server, rather than per-room, basis.

This is a devastating change, since it gags the user server-wide, even in rooms which they might be using for legitimate purposes. You'd have to be very, very careful, and who knows how much havoc could be wreaked if a server like matrix.org got compromised. What would be your reasoning behind administrators only being able to do it server-side?

I'd argue that right now it would be more important for tools to limit the number of new accounts coming from the same place. Shadow bans work better when you're in a centralised system, but it becomes a whole lot harder on Matrix.

vlaho-m commented 7 years ago

@Half-Shot, thank you for your thoughtful reply. You've brought up some very good points, and I'd like to address each one as well as I can. I'm about as neutral as I can be on whether or not Matrix should have shadowbans, but opinions on the topic seem to be very strong since I originally brought it up in #matrix:matrix.org. People seem to be either very positive or very wary about the idea; this hopefully means there might be a middle ground variation of this idea that's right for implementation on Matrix.

This requires that admins be responsible for their users. A spammer (or even an admin hosting several spammers) could easily spin up a synapse installation on $CLOUD_PROVIDER and then you'd still have spam, but you'd have to fall back to old tools. You could shadow ban a whole server to deal with it, but then you run the risk of hitting legitimate users as well. [...] This is my biggest worry. This is the first time I've seen a feature requested that actively lies to the client. Worse yet, if the user had two clients open, they could clearly see the ban in effect and its point would be nullified.

You have actually hit on one of the big draws of shadowbanning: spammers and chronic trolls need to invest a lot more time and resources in order to be meagerly successful. They could federate their own Matrix server as you said, which kind of falls out of scope for shadowbanning; federation abuse will one day be a problem for the Matrix community to handle with new, carefully considered administration tools and procedures. They can also run multiple clients, which would require multiple IP addresses (in this world of NATs, shadowbans shouldn't be IP based, but an intelligent administrator can easily correlate IP blocks, login times, and other metadata to catch all of an abusive user's accounts), again raising the barrier to entry for abuse.
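Purely as an illustration of the kind of correlation an intelligent administrator might script, here is a sketch that clusters accounts sharing an IPv4 /24 block with logins close together in time; the login-record format is invented for this example, not a real Synapse table:

```python
from collections import defaultdict
from datetime import datetime, timedelta


def correlate_accounts(logins: list[dict],
                       window: timedelta = timedelta(minutes=30)) -> dict[str, set[str]]:
    """Group user IDs whose logins share an IPv4 /24 and happen within `window`.

    Each (hypothetical) login record looks like:
        {"user_id": "@x:hs", "ip": "203.0.113.7", "ts": datetime(...)}
    """
    by_block = defaultdict(list)
    for login in logins:
        block = ".".join(login["ip"].split(".")[:3])  # crude /24 bucket
        by_block[block].append(login)

    clusters: dict[str, set[str]] = {}
    for block, entries in by_block.items():
        entries.sort(key=lambda e: e["ts"])
        linked: set[str] = set()
        for earlier, later in zip(entries, entries[1:]):
            if later["ts"] - earlier["ts"] <= window:
                linked.update({earlier["user_id"], later["user_id"]})
        if linked:
            clusters[block] = linked
    return clusters
```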

Shadowbans also make automated abuse much harder: even with one or more observer accounts, it's not possible to automatically tell a shadowban apart from extreme latency or a federation error. This leads to a huge increase in complexity for a spambot or other automated malicious system, which would have to balance the potential of having been shadowbanned against the cost of unnecessarily burning a Matrix account (and associated email address or other 3PID) and IP address for a shadowban detection false positive.
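To sketch why that detection is so unreliable: the two callables below stand in for whatever a bot would use to post from the suspect account and to read from an observer account (they are placeholders, not real client methods), and the key point is that a timeout tells you nothing:

```python
import time


def probe_for_shadowban(send_from_suspect, seen_by_observer, timeout: float = 60.0) -> str:
    """Try to detect a shadowban using a second, observer account.

    send_from_suspect() posts a probe message and returns its event ID;
    seen_by_observer(event_id) reports whether the observer has received it.
    """
    event_id = send_from_suspect()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if seen_by_observer(event_id):
            return "delivered"      # definitely not shadowbanned
        time.sleep(5)
    # A timeout proves nothing: a shadowban, slow federation, or an outage all
    # look identical, so acting on it risks burning a clean account for nothing.
    return "ambiguous"
```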

All that said, you're completely right that shadowbans lie to the client. The very good point has also been made that, to almost every observer, a shadowban is completely indistinguishable from a Matrix protocol error or federation failure. Shadowbans look like bugs, and that might not be the best thing for a network and implementation that still have more bugs than we'd like. And from a social standpoint, shadowbans are definitely more passive-aggressive than our traditional moderation tools.

This is a devastating change, since it gags the user server-wide, even in rooms which they might be using for legitimate purposes. You'd have to be very, very careful, and who knows how much havoc could be wreaked if a server like matrix.org got compromised. What would be your reasoning behind administrators only being able to do it server-side?

Considering all the points you've brought up as a whole, it seems you've almost reached my same line of reasoning for shadowbans being server-wide: because it's actually more severe than a normal ban and absolutely shouldn't ever be used lightly, shadowbans should only ever be used on accounts with no legitimate business on that homeserver, or accounts belonging to people whose negative impact on their homeserver far, far outweighs any legitimate business they might have there. Making it a moderator tool or something that can be applied on a room-by-room basis cheapens shadowbans, but makes them no less damaging if used unnecessarily.

I'd argue that right now it would be more important for tools to limit the number of new accounts coming from the same place. Shadow bans work better when you're in a centralised system, but it becomes a whole lot harder on Matrix.

I can absolutely agree that we need better tools to limit registration. One thing the matrix.org homeserver does is verify email addresses, but I don't know if this is used to limit spam accounts and re-registration, or just for identity server 3PID authentication. I think it's also safe to say Matrix will need some kind of federation moderation tools, perhaps sooner rather than later. These things are arguably much more general and useful than shadowbans at this point in time.

Your final point is solid; shadowbans haven't ever really been implemented in a distributed system, as far as I know. I'm almost tempted to set up a small, modified homeserver to experiment with various community moderation ideas (and some Matrix to Web publishing stuff I've been wanting to implement).

MilkManzJourDaddy commented 7 years ago

Actually, IRC, which is "distributed" over many IRCd servers, has had this for quite some time. It is called "Shun" there.

Example syntax: /SHUN [+|-]user@host | nick [time to shun reason]

The IRCd documentation mentions an option to set the duration of the shun, a.k.a. shadowban.

And it is done on enough other services that there is a Wikipedia article: https://en.wikipedia.org/wiki/Stealth_banning

IRC has other protections too, and an ordinary IRC channel operator can set a flag and mute/"devoice" everyone below an arbitrary PL until they can sort out the chaos and restore order.

But with Matrix there doesn't seem to be a plan for a hierarchical council of "IRCOps", as Matrix hopefully won't be carved up like IRC was. And rooms can federate across all of Matrix. So this seems like a function for room Admins (PL 100), as that is really where the regulation of SPIM/SPAM and other abuses happens. There is no telling whether those in control of a HS could be reached, or whether they care.

A controller of a HS might need to free up aliases, intervene in a room that is primarily on their HS, or regulate an MXID on their HS. But ordinary regulation of abuse happens within a room, the way Matrix and its "Kitchen Sink" seem to be laid out.

It would be great if tools like these were not needed. But they are. Better to allow a timed-duration, as people might change their ways.

locutis-of-borg-1999 commented 7 years ago

I thought I saw on Freenode #guardianproject a while back a constant thorn who had a mobile client or a tether with land IP addresses. They kept pounding the channel, but _hc or n8fr8 were polite. Sometimes it was related; other times, ranting. Either way, it seemed a simple ban would not have worked all that easily with them hammering the chan. A /shun sent each time they popped up, for a period, would have been great. Maybe Matrix doesn't use banmasks, where other users of the IP or nick would be affected. But it does take wildcard bans. So if a whole range of users or servers needs to get a time-out: yah!

Half-Shot commented 7 years ago

I entirely forgot about this thread. Aaanyway, my current thinking is that this builds into the reputation system, which is a magical device that does not exist yet but seems to be at the center of quite a few problems for Matrix. At any rate, the problem with shadowbanning is that it's just quite ugly, to be honest, and I think both @vlaho-m and I have covered the pros and cons of it; I believe it's neither the worst possible idea nor the best thing for Matrix.

At any rate, my ideal system would be reputation where users that are likely to spam are already tagged as such and are unlikely to ever be viewed by your client. Without retreading ground that has already been talked about in other threads, being able to flag users as 'trouble' on a decentralised system would be more ideal for several reasons:

MilkManzJourDaddy commented 7 years ago

The fact remains that this has been done on other distributed systems. And Moderators are expected to do their job. And regulation of room traffic is up to the Admins, Mods, and Custom PLs. It is within their bounds. Server maintainers get involved when rooms get abandoned, when aliases are needed elsewhere, or when MXIDs on their HS are being used for abuse. But, as often seen on IRC and XMPP, those who control servers do not react swiftly and appropriately, and often not at all.

When abuses are persistent, those trying to do decent work are constantly defending against nagging thorns.

This has been a common model in many distributed systems. There are other similar examples, like the Usenet Death Penalty, Cancelbot, et cetera. But perhaps the closest to Matrix is IRC's /shun.

Threading might be coming, but for now, with linear message flow, it is what it is.

It might be possible to do a shadowban/shun for a fixed time period to allow evaluation, where the room Admins/Mods/Custom PLs still see the posts and some might be "approved", as seen on other services.
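A rough sketch of what such a timed shun with a review queue might look like as a data structure; none of these names correspond to an existing Matrix API, they are invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass
class TimedShun:
    user_id: str
    expires_at: datetime
    held_events: list[dict] = field(default_factory=list)  # quarantined posts

    def active(self, now: datetime) -> bool:
        return now < self.expires_at

    def hold(self, event: dict) -> None:
        """Quarantine a post instead of delivering it to the room."""
        self.held_events.append(event)

    def approve(self, event_id: str) -> dict | None:
        """Let an Admin/Mod release an individual held post, as on other services."""
        for i, event in enumerate(self.held_events):
            if event.get("event_id") == event_id:
                return self.held_events.pop(i)
        return None


# Example: a one-week shun whose held posts a moderator can review.
shun = TimedShun("@troll:example.org",
                 expires_at=datetime.now(timezone.utc) + timedelta(days=7))
```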

t3chguy commented 6 years ago

Shun is easier to implement on IRC in that people can't spin up their own servers; here, if shun were a thing, their server would receive the state which said to shun the user. Unless the shun was to shun their entire server.

Half-Shot commented 6 years ago

I suspect one of the potential things we could do is just offer clients a recommended ignore list, which would end up functioning the same way, similar to the Twitter ban-list thing.
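For what it's worth, the client-server spec already defines an `m.ignored_user_list` account-data type, so a recommended list could simply be merged into it client-side. A minimal sketch over plain HTTP; the source of the recommended list is left abstract and error handling is omitted:

```python
import requests


def adopt_recommended_ignores(homeserver: str, user_id: str, access_token: str,
                              recommended: list[str]) -> None:
    """Merge a recommended ignore list into the user's m.ignored_user_list."""
    url = (f"{homeserver}/_matrix/client/v3/user/{user_id}"
           "/account_data/m.ignored_user_list")
    headers = {"Authorization": f"Bearer {access_token}"}

    resp = requests.get(url, headers=headers)
    ignored = resp.json().get("ignored_users", {}) if resp.ok else {}

    for mxid in recommended:
        ignored.setdefault(mxid, {})  # the spec uses an empty object per ignored user

    requests.put(url, headers=headers, json={"ignored_users": ignored})
```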

Half-Shot commented 6 years ago

(This doesn't help you with state however, but that's a whole different kettle of fish)

ara4n commented 5 years ago

More notes from @eternaleye on this area over at https://xenforo.com/community/threads/suggestion-a-more-graduated-ignore-system.33542/

ara4n commented 5 years ago

this is fairly linked to #2313

eternaleye commented 5 years ago

To add some context that was discussed in chat at the time (Matthew is @ara4n, eternaleye is myself):

\ the "fascinations of the negaverse" idea probably works better in a bb/forum context

\ as once hellbanned on a chat system, the chances of the user spouting redeemable content is pretty slim.

\ Hm, I'd guess that depends on what the scopes of things are

\ Consider room X, which has hellbanned user A. Let's presume that we also have users B and C, who are in rooms X and Y with user A. Their clients could alert them to that A is hellbanned in X (perhaps via something flair-like), and they could decide whether to peer in on them in X, or propagate a matching ignore to Y, according to taste

\ This takes advantage of the room functionality that Matrix has and fora do not

\ Giving people an opportunity to "adopt" a room-level hellbanning into a user-wide ignore, or counteract a room-level hellbanning based on what they see in other rooms.

I've lightly edited my longest message for clarity; embarrassingly, I had some semantically meaningful typos and omissions. The original content:

Consider room A, which has hellbanned user X. Let's presume that we also have users B and C, who are in room Y with user A. Their clients could alert them to that A is hellbanned in X (perhaps via something flair-like), and they could decide whether to peer in on them in X, or propagate a matching ignore to Y, according to taste

richvdh commented 3 years ago

I'm going to close this now that https://github.com/matrix-org/synapse/pull/8028 has landed.