w3c / activitypub

http://w3c.github.io/activitypub/

Brigading #423

Open evanp opened 7 months ago

evanp commented 7 months ago

One of the biggest concerns on the fediverse is brigading, that is, coordinated harassment campaigns to terrorize or silence someone on the social web.

The technique is similar to a DDoS, but at the social layer: having enough humans make harassing comments or threats overwhelms the individual's ability to block users and moderators' ability to block domains.

Brigading is a social phenomenon, but there can be technical measures to help mitigate an attack.

It might be interesting to collect best practices in software and moderation tools to mitigate harm from brigading.

evanp commented 7 months ago

Blocklists and the #fediblock hashtag have been useful on the fediverse for this problem so far, in particular for servers that were set up specifically for mass harassment.
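For illustration, a minimal sketch of a domain-level block check; the blocklist contents and function names here are made up, not anything specified by ActivityPub:

```python
from urllib.parse import urlparse

# Hypothetical shared blocklist of instance domains, e.g. assembled from
# #fediblock reports or an imported community blocklist.
blocked_domains: set[str] = {"harassment.example"}

def is_blocked(actor_id: str) -> bool:
    """Reject inbound activities from actors hosted on blocklisted domains."""
    return urlparse(actor_id).hostname in blocked_domains
```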

evanp commented 7 months ago

Greylisting might also be a good technique here. (I'd appreciate any guidance on a better term for this technique. "Delay listing"?)

This is a technique for temporarily rejecting messages to force the sender to retry later. It's used in email servers to slow down spam attacks, with the assumption that spam software won't bother with retrying.

It may slow down delivery by harassers, but I'm not sure how it's better than just rejecting delivery permanently.
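As a rough sketch of how a server might greylist inbox deliveries (the key scheme, delay, and in-memory store are assumptions for illustration):

```python
import time

# Hypothetical in-memory greylist: when we first saw each (sender, inbox)
# pair. A real server would persist this and expire old entries.
_first_seen: dict[str, float] = {}

GREYLIST_DELAY = 300  # seconds a sender must wait before a retry succeeds

def accept_delivery(actor_id: str, inbox_url: str) -> bool:
    """Return True to accept the activity, or False to answer with a
    temporary error (e.g. HTTP 429/503) so a well-behaved sender retries."""
    key = f"{actor_id} {inbox_url}"
    first = _first_seen.setdefault(key, time.time())
    # First attempt records the timestamp and is rejected; a retry after
    # the delay has elapsed is accepted.
    return time.time() - first >= GREYLIST_DELAY
```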

evanp commented 7 months ago

Shadow banning is another technique that might be helpful. Showing a brigader their own comment or reply in the list of replies, even though no one else can see it, might get them to stop posting.

This seems less effective in e.g. Mastodon, where blocked users' replies are shown anyway.
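A minimal sketch of the viewer-dependent visibility this implies; the Reply type and the shadow-ban set are hypothetical stand-ins for whatever the server actually stores:

```python
from dataclasses import dataclass

@dataclass
class Reply:
    author: str   # actor ID of the reply's author
    content: str

# Hypothetical set of shadow-banned actor IDs maintained by moderators.
shadow_banned: set[str] = {"https://bad.example/users/harasser"}

def visible_replies(replies: list[Reply], viewer: str) -> list[Reply]:
    """Shadow-banned authors still see their own replies; everyone else
    gets those replies filtered out."""
    return [
        r for r in replies
        if r.author not in shadow_banned or r.author == viewer
    ]
```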

strugee commented 7 months ago

> It may slow down delivery by harassers, but I'm not sure how it's better than just rejecting delivery permanently.

(Can't tell if this is a question about how greylisting works in email, or just thinking out loud about how it would work in an AP context; feel free to ignore this paragraph if it's the latter.) In email, the idea is that you apply greylisting very widely, either to messages remotely suspected of being spam or even just to all messages. If the sender retries and the message is eventually delivered, that's one signal in favor of it not being spam.

Greylisting works because of the economics of sending email spam at the technical level, but I don't think it'll be very helpful here, because the basic threat model is users on (from a software perspective) "ordinary" instances doing this at the social level. Those ordinary instances will presumably all have normal retry mechanisms. Also, money presumably isn't really an issue, because the goal isn't higher profit margins on the spam; the goal is harassment for the sake of harassment. So even if technically-minded users are running custom software, they're unlikely to be deterred by the need to requeue deliveries.

strugee commented 7 months ago

Another technique I've thought of in the past: block interactions with anyone who wasn't already interacting with you before a certain datetime (i.e. before the attack started). This would basically let you use your existing social graph as normal, but would prevent your social graph from expanding to anyone potentially malicious. Not sure if this is a separate technique or just a potential implementation of a "shields up" mode.
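A minimal sketch of that cutoff check, assuming the server already records when it first saw each remote actor (all names here are made up):

```python
from datetime import datetime

# Hypothetical record of when each remote actor first interacted with us.
first_interaction: dict[str, datetime] = {}

def allow_interaction(actor_id: str, shields_up_since: datetime) -> bool:
    """With shields up, accept interactions only from actors who were
    already part of the social graph before the cutoff."""
    seen = first_interaction.get(actor_id)
    return seen is not None and seen < shields_up_since
```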

ThisIsMissEm commented 7 months ago

> Shadow banning is another technique that might be helpful. Showing a brigader their own comment or reply in the list of replies, even though no one else can see it, might get them to stop posting.
>
> This seems less effective in e.g. Mastodon, where blocked users' replies are shown anyway.

Shadowbanning has historically heavily affected minority communities on platforms and been massively harmful. I would strongly advise against using shadowbanning as a tool.

bumblefudge commented 7 months ago

> block interactions with anyone who wasn't already interacting with you before a certain datetime (i.e. before the attack started). This would basically let you use your existing social graph as normal, but would prevent your social graph from expanding to anyone potentially malicious.

I'd love to see any whitepapers, specs, or docs on this kind of UX/design approach if you have them! Surely SOMEONE has implemented something analogous?

evanp commented 7 months ago

> Shadowbanning has historically heavily affected minority communities on platforms and been massively harmful. I would strongly advise against using shadowbanning as a tool.

Really? I mean, in this particular situation, where one user is being brigaded by a mob of other people, are you sure that the technique of showing one of the harassers their own post, but hiding it from others, is harmful to underrepresented people?

I know that "shadowban" is also used for other similar techniques, like downranking people in algorithmic feeds, and I can see it being a problem there. But in the particular case of a brigading attack, that concern seems like a lower priority. Help me out if you can!

https://en.wikipedia.org/wiki/Shadow_banning