Open erlend-sh opened 1 year ago
Sounds interesting.
Something like this could probably be implemented on top of our existing Role system, by adding more role types that correspond to "trust levels" (or whatever term we might choose).
What kind of features did you think about gating behind a higher trust level? Maybe full-text search access?
First restrictions that come to mind for 0-trust users:
I wouldn't want to restrict search for anyone, because it's an essential tool for learning. It's how you look up potential duplicates of what you want to share, people you wanna follow, etc. But it could definitely be more rate-limited.
Another way to think about a TL0 user is that they're effectively considered a Bot, so they can be trusted to the same degree a search crawler can be trusted.
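The "more rate-limited" idea above could be sketched as a plain token bucket, where a TL0 user gets a tiny bucket (e.g. capacity 2, refilled over 24 hours for the "2 posts per day" case below). This is a minimal illustration, not Kitsune's actual rate-limiting code; all names here are made up.

```rust
use std::time::Instant;

/// Minimal token-bucket limiter. A TL0 user would get a small
/// capacity and a slow refill; higher trust levels get bigger buckets.
pub struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill: Instant,
}

impl TokenBucket {
    pub fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self {
            capacity,
            tokens: capacity,
            refill_per_sec,
            last_refill: Instant::now(),
        }
    }

    /// Returns true if the action is allowed, consuming one token.
    pub fn try_consume(&mut self) -> bool {
        // Refill based on elapsed time, capped at capacity.
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill = now;

        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

For example, `TokenBucket::new(2.0, 2.0 / 86_400.0)` allows two actions immediately and then roughly one more every 12 hours.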
Or, instead of statically assigning user roles with hardcoded logic attached, we could make this more dynamic.
On access to certain API endpoints, Kitsune loads user statistics and passes them to a WASM blob that the admin wrote. This blob takes in the statistics and returns the features it wants to allow for the user.
This invocation would be cached for ~30 min to avoid unnecessarily straining the database with the statistics aggregation (I think it's fine for something like this to lag a bit behind).
We can bikeshed over whether this should be WASM-based or something more scripty like Lua, but in my opinion something programmable would be nice to give admins more control.
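The dynamic policy idea above could look roughly like this: a stats struct fed into an admin-provided hook that returns a feature set, wrapped in a TTL cache. This is a sketch under stated assumptions; the field names, the `TrustPolicy` trait, and the single-entry cache (a real server would cache per user) are all illustrative, and the WASM/Lua blob is modeled as a plain Rust trait.

```rust
use std::time::{Duration, Instant};

/// Per-user statistics Kitsune would aggregate and hand to the policy.
/// Field names are illustrative, not Kitsune's actual schema.
#[derive(Clone, Copy)]
pub struct UserStats {
    pub account_age_days: u32,
    pub reports_against: u32,
}

/// Features the policy grants; mirrors the restrictions discussed in this thread.
#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Features {
    pub can_post_media: bool,
    pub can_send_dms: bool,
    pub max_mentions: u8,
    pub posts_per_day: u16,
}

/// The admin-provided hook. In the proposal this would be a WASM
/// (or Lua) blob; here it is modeled as a trait for simplicity.
pub trait TrustPolicy {
    fn evaluate(&self, stats: UserStats) -> Features;
}

/// Cache wrapper so the policy (and the stats aggregation feeding it)
/// runs at most once per TTL. Simplified to a single cached entry.
pub struct CachedPolicy<P: TrustPolicy> {
    policy: P,
    ttl: Duration,
    cached: Option<(Instant, Features)>,
}

impl<P: TrustPolicy> CachedPolicy<P> {
    pub fn new(policy: P, ttl: Duration) -> Self {
        Self { policy, ttl, cached: None }
    }

    pub fn features(&mut self, stats: UserStats) -> Features {
        match self.cached {
            // Still fresh: reuse the cached decision.
            Some((at, feats)) if at.elapsed() < self.ttl => feats,
            // Expired or empty: re-run the policy and cache the result.
            _ => {
                let feats = self.policy.evaluate(stats);
                self.cached = Some((Instant::now(), feats));
                feats
            }
        }
    }
}

/// Example policy: brand-new or reported accounts are treated like
/// TL0 "bots" with the restrictions from the list below.
pub struct SimplePolicy;

impl TrustPolicy for SimplePolicy {
    fn evaluate(&self, stats: UserStats) -> Features {
        let trusted = stats.account_age_days >= 7 && stats.reports_against == 0;
        Features {
            can_post_media: trusted,
            can_send_dms: trusted,
            max_mentions: if trusted { 10 } else { 1 },
            posts_per_day: if trusted { 500 } else { 2 },
        }
    }
}
```

An admin-written WASM blob would replace `SimplePolicy` here, so the thresholds and granted features become fully programmable while the caching layer stays the same.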
- Rate-limited posting, maybe 2 per day.
- No images/videos
- No more than one @ mention.
These sound really sensible, and like they could fix some of the issues existing instances with open registrations face (e.g. bot spam on the local timeline, mention spam, etc.)
Especially the "no images/videos" point could reassure some admins/moderators that new users can't post illegal media content (or at least make it less likely)
In light of the recent crypto spam, I added ‘No DMs’.
The TL concept seems to be resonating with the fedi community 💖 https://writing.exchange/@erlend/110391232157395456 https://mastodon.social/@dansup/110417912280239064
To counteract disingenuous behavior by new users, consider Discourse's concept of Trust Levels: https://blog.discourse.org/2018/06/understanding-discourse-trust-levels/
Orbit has a similar system, but more focused on onboarding than incremental trustworthiness: https://orbit.love/model
Also related: https://medium.com/the-node-js-collection/healthy-open-source-967fa8be7951 https://www.trustcafe.io/en/faqs