moan0s opened 8 months ago
I think it would be nice to be able to hand off moderation to a different system. That would allow moderation tools to be developed and released independently of GtS, even by other community members, without any tie-in to the GtS development and release cycle. The simplest protocol we could have is shunting the Activity over an HTTP call to a moderation service that then returns a yes/no/hold response. We could further limit ourselves to transferring only a partial view of the object, with only content, attachments, and tags.
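To make that concrete, here's a rough sketch of the hand-off in Go; the /check endpoint, field names, and verdict strings are all made up for illustration, nothing here is a settled spec:

```go
// Hypothetical shapes for the moderation hand-off; the /check endpoint
// and all field names are illustrative, not a settled protocol.
package moderation

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
)

// CheckRequest carries only a partial view of the Activity:
// content, attachments, and tags, as suggested above.
type CheckRequest struct {
	Content     string   `json:"content"`
	Attachments []string `json:"attachments"` // e.g. media URLs
	Tags        []string `json:"tags"`
}

// CheckResponse is the moderation service's verdict.
type CheckResponse struct {
	// Verdict is one of "yes" (accept), "no" (drop), or "hold"
	// (queue for a human moderator).
	Verdict string `json:"verdict"`
}

// Check shunts the partial Activity to the moderation service
// and returns its verdict.
func Check(ctx context.Context, client *http.Client, endpoint string, req CheckRequest) (string, error) {
	body, err := json.Marshal(req)
	if err != nil {
		return "", err
	}
	httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint+"/check", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	httpReq.Header.Set("Content-Type", "application/json")
	resp, err := client.Do(httpReq)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("moderation service returned %d", resp.StatusCode)
	}
	var out CheckResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Verdict, nil
}
```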
One thing I do think will cause problems is that if we put stuff in a queue for admins to approve, there's no way for clients to get that feedback. If you look at the Masto API, the two defined errors are unauthenticated and unprocessable entity. We can return our own, but how clients display that and react to it is a bit up in the air.
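For example, one option would be to reuse 422 with a descriptive error body for a held post. A sketch, with hypothetical naming; both the wording and whether clients would surface it usefully are open:

```go
package api

import (
	"encoding/json"
	"net/http"
)

// writeHeldError sketches one option: reuse 422 Unprocessable Entity
// with a descriptive error body when a post lands in the moderation
// queue. Whether clients display this message sensibly is the open
// question raised above.
func writeHeldError(w http.ResponseWriter) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusUnprocessableEntity)
	_ = json.NewEncoder(w).Encode(map[string]string{
		"error": "Your post has been queued for moderator review.",
	})
}
```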
There are also some weird scenarios, like someone sending a reply to an ongoing discussion, that reply getting caught in moderation, and it potentially only going out hours later. That may or may not be desirable in certain cases, so we'd have to give people some way to then say "ah, scrap that message entirely" instead.
this is something I plan to work on; not directly, but at least preparing the ground for it, by writing a Lua-based plugin system that's still locked down enough that admins can't, say, just opt out of blocks/deletes.
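For a sense of what "locked down" could mean in practice, here's an illustrative Go-flavoured sketch (the real system is planned around Lua, and all names here are hypothetical): plugins only get a hook to vote on incoming activities, and block/delete handling simply isn't exposed to them.

```go
package plugin

// Verdict is what a plugin may return for an incoming activity.
type Verdict int

const (
	Accept Verdict = iota // let the activity through
	Drop                  // reject it outright
	Hold                  // queue it for a human moderator
)

// Activity is the partial view a plugin gets to inspect.
type Activity struct {
	Content     string
	Attachments []string
	Tags        []string
}

// Plugin deliberately exposes only a filter hook for new incoming
// activities. Blocks and Deletes are handled before plugins run and
// have no hook here, so an admin can't install a plugin that opts
// out of them.
type Plugin interface {
	Name() string
	FilterIncoming(a Activity) Verdict
}
```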
Is your feature request related to a problem?
There is a spam problem within the fediverse, and a generally high moderation effort is needed. Not many moderation tools are developed to effectively combat spam; GoToSocial has only an experimental spam filter. Spam waves can also look different, so different approaches might be appropriate at different times to classify messages correctly. All spam filters will produce false positives, and currently these get dropped without any possibility of moderator intervention.
Describe the solution you'd like.
One possible solution might be a plugin-based or rule-based approach where rules can be activated/deactivated in the web frontend, and custom rules can be developed and shared.
I imagine something like Reddit's AutoMod that automatically checks all incoming messages (or even all messages, including those within the instance) and has rules and different actions. Ideally the automod can not only drop or accept messages but also filter them. Filtering means that a message is not shown to users until it is approved by a moderator. This could be done in the web frontend, in a tab below "Reports".
An example where this would be useful: a queer instance receives harassment on a semi-regular basis, so an automod rule is made to filter messages that contain known slurs. Deleting all messages containing slurs would be bad, as people need to be able to talk about their experiences. A human moderator then decides whether it is harassment or a normal interaction. If the rule has too many false positives, they can also adjust the rule.
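As a sketch of what such a rule could look like (all names here are hypothetical, assuming a simple keyword rule with accept/drop/filter actions):

```go
package automod

import "strings"

// Action is what a rule does with a matched message.
type Action int

const (
	Accept Action = iota // deliver normally
	Drop                 // discard the message
	Filter               // hide it until a moderator approves
)

// Rule matches messages and picks an action; rules could be
// toggled in the web frontend and shared between instances.
type Rule struct {
	Name     string
	Enabled  bool
	Keywords []string // e.g. a shared list of known slurs
	OnMatch  Action
}

// Apply returns the action for a message: the first enabled rule
// that matches wins, otherwise the message is accepted.
func Apply(rules []Rule, content string) Action {
	lowered := strings.ToLower(content)
	for _, r := range rules {
		if !r.Enabled {
			continue
		}
		for _, kw := range r.Keywords {
			if strings.Contains(lowered, strings.ToLower(kw)) {
				return r.OnMatch
			}
		}
	}
	return Accept
}
```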
One issue is privacy, as mentioned-only or followers-only messages will either not be checked, or moderators will gain access to private messages without a participant giving explicit permission.
Describe alternatives you've considered.
A bot with access to all interactions on the instance that auto-reports possible spam/harassment.
Additional context.
I know this is an involved feature that somewhat breaks the moderation schema of e.g. Mastodon. But I feel like GtS could be a project that implements a better moderation option, one that makes ethical use of automatic moderation.