Beethoven-n closed this 1 year ago
Given the time taken to lay out these thoughts as what appears to be constructive criticism, it seems only fair that the same time and effort is returned in addressing them and explaining the reasons behind some of the current decisions. It makes a nice change; I've personally become quite closed off to suggestions like this because of the attacking tone most of them take.
The rules can never be fully specific; someone will always twist and misinterpret them. They need to be flexible enough for us to take into account the context of the messages and the overall 'atmosphere' of the community at the time. A good example is 'Ok Boomer', a popular meme, which you yourself cited back in 2020-2021.
This was fine at first, until a considerable portion of the chatters started to use it as a weapon to dismiss the views of their older peers. Even when it is meant in jest, we find it easier to operate a blanket 'no terms used to demean any specific group of people' policy than to police such terms individually. This is an operational choice rather than a specific viewpoint: we have finite resources, so the phrase was added to the bot and the individual context was reviewed on appeal.
The same reasoning applies to a lot of situations. We do not allow sexuality to be discussed, in either direction. It's a difficult landscape: while a lot of people are wrong, they wholeheartedly believe their views are correct and will fight for them just as hard as the other side, and MineTogether is not a place we want that battle to take place. This is people's rights we're talking about.
People may not agree with this view, and that is fine; we do not force anyone to use the platform. We just want everyone to be able to play together without insulting each other (even in jest, or as part of a popular meme).
It would be very hard to distil this viewpoint into a set of hard-and-fast rules. The moderation policy is complex, and honestly, personal views do make their way in. The only thing we can promise is to try to be fair; we won't always be, but I'm going to address that further in the future changes to the appeal system.
Previously, the system just used WebPurify's profanity filter wholesale. Recently we started using natural language processing alongside the profanity filter to try to provide automated context for messages. This still only takes into account the individual message, not any previous messages or the context of the current chat, and it will never be perfect because of that, but it is substantially improved from before, and ban messages now explain what was detected.
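To make that concrete, the rough shape of the check looks something like the sketch below. This is purely an illustrative example: the class and method names (ProfanityFilter, NlpClassifier, MessageModerator, and so on) are invented for this comment and are not MineTogether's actual code or WebPurify's actual API.

```java
// Illustrative sketch of a single-message moderation check that combines a
// word-list profanity filter with an NLP classifier. All names are hypothetical.

import java.util.List;
import java.util.Optional;

interface ProfanityFilter {
    /** Returns the terms the filter flagged, empty if the message is clean. */
    List<String> flaggedTerms(String message);
}

interface NlpClassifier {
    /** Returns a label such as "harassment" with a confidence score, if any. */
    Optional<Classification> classify(String message);
}

record Classification(String label, double confidence) {}

record ModerationResult(boolean blocked, String reason) {}

final class MessageModerator {
    private final ProfanityFilter filter;
    private final NlpClassifier classifier;
    private final double threshold;

    MessageModerator(ProfanityFilter filter, NlpClassifier classifier, double threshold) {
        this.filter = filter;
        this.classifier = classifier;
        this.threshold = threshold;
    }

    /** Checks one message in isolation; no surrounding chat context is consulted. */
    ModerationResult check(String message) {
        List<String> flagged = filter.flaggedTerms(message);
        if (!flagged.isEmpty()) {
            // The reason string is what would end up in the ban message.
            return new ModerationResult(true, "Filtered terms detected: " + String.join(", ", flagged));
        }
        return classifier.classify(message)
                .filter(c -> c.confidence() >= threshold)
                .map(c -> new ModerationResult(true, "Detected as '" + c.label() + "'"))
                .orElse(new ModerationResult(false, "OK"));
    }
}
```

The limitation is visible right in the check method: it only ever sees one message at a time, which is why the surrounding chat context still gets lost.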
As mentioned previously, we do need to provide feedback on our ban appeals system, but this is delayed behind remaking the MineTogether website, which is unfortunately a low priority at this time; getting the mod functionality working properly is the highest priority in the project right now.
People often say "just mute". Honestly, I'd love to, and we have ways to do it, but the problem we found early in the project's history is that anything short of a hard ban resulted in people testing the system to find ways around it while no human moderators were available. I'd rather have a chat where the positive contributors occasionally have to help us moderate by filing the odd appeal to tell us the bot (or we ourselves) got it wrong, than allow a wrongdoer to have unfettered access to the community for potentially hours on end.
These are things I have chosen: my 'requirements' for putting our time, energy, name, and money into this project, which has grown in scope and complexity beyond anything originally imagined. (Many of us moderate outside our paid shifts because we love the chat/community.)
The MoTD suggestions are all correct, and the easiest solution is an 'agree' checkbox that captures the initial MoTD and doesn't allow further messages in the chat until it has been accepted, but this comes back to having a stable code-base to allow backporting, as I would rather provide a consistent experience to end users where possible. (The LMGTFY link was also meant to let people know it's okay to still have a bit of fun; it seems it didn't work out that way.)
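For anyone curious what that checkbox amounts to, a minimal sketch is below. Again, this is illustrative only (the MotdGate class and its methods are invented for this comment, not the real MineTogether client code), but it shows the intent: remember which MoTD the user accepted and refuse to send chat until the current one has been acknowledged.

```java
// Hypothetical sketch of gating chat behind MoTD acceptance: the client stores a
// hash of the MoTD the user agreed to and only allows chatting while that hash
// matches the MoTD currently being served.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

final class MotdGate {
    private String acceptedMotdHash; // would be persisted in client config in a real build

    /** Records that the user ticked the agree checkbox for this MoTD text. */
    void accept(String motdText) {
        this.acceptedMotdHash = hash(motdText);
    }

    /** True only if the user has agreed to the MoTD currently being served. */
    boolean mayChat(String currentMotdText) {
        return acceptedMotdHash != null && acceptedMotdHash.equals(hash(currentMotdText));
    }

    private static String hash(String text) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(text.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because the gate keys off the current MoTD text, editing the MoTD would naturally force everyone to accept it again before chatting.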
That said, I am starting to ease up on my demand for consistency, including potentially looking at UI changes that break away from Mojang's GUI conventions, as some of our data just does not fit well into them.
I do hope this at least explains our reasoning and gives you hope for future changes, even if it does not fully address the problems you see.
I get the feeling this issue might take a bit to address, so feel free to
> The rules can never be fully specific; someone will always twist and misinterpret them. They need to be flexible enough for us to take into account the context of the messages and the overall 'atmosphere' of the community at the time. A good example is 'Ok Boomer', a popular meme, which you yourself cited back in 2020-2021.
> This was fine at first, until a considerable portion of the chatters started to use it as a weapon to dismiss the views of their older peers. Even when it is meant in jest, we find it easier to operate a blanket 'no terms used to demean any specific group of people' policy than to police such terms individually. This is an operational choice rather than a specific viewpoint: we have finite resources, so the phrase was added to the bot and the individual context was reviewed on appeal.
I do think there is room to be a tiny bit more specific. For example, the MOTD only says "no discriminatory terms," rather than the full "no terms used to demean any specific group of people." All I really want is for the MOTD to present the current rules as they are actually enforced, not to make any specific rule change. More documentation, essentially. Other community spaces tend to have things like "no racism, sexism, queerphobia, misogyny," and so on.
> The same reasoning applies to a lot of situations. We do not allow sexuality to be discussed, in either direction. It's a difficult landscape: while a lot of people are wrong, they wholeheartedly believe their views are correct and will fight for them just as hard as the other side, and MineTogether is not a place we want that battle to take place. This is people's rights we're talking about.
This is actually something that I agree with! This is a good bridge to talk about the rules on Discord, as well. For example, there is no explicit ban on NSFW material there, even though the space is intended for children to attend.
> It would be very hard to distil this viewpoint into a set of hard-and-fast rules. The moderation policy is complex, and honestly, personal views do make their way in. The only thing we can promise is to try to be fair; we won't always be, but I'm going to address that further in the future changes to the appeal system.
To be honest, I think a more specific set of rules would help the community know the boundaries of the space better. For example, "no swearing" leaves a lot to interpretation. With words like "crap, damn, God," and even phrases like "shut up" being considered swears in some communities, every person's "common sense" produces a different list of swears. Yes, that usually does include your "fucks and shits," but what counts as a swear does need to be qualified for the sake of boundaries. If someone tests the boundaries after being explicitly told what they are, then ban away! They deserve it for that.
> Previously, the system just used WebPurify's profanity filter wholesale. Recently we started using natural language processing alongside the profanity filter to try to provide automated context for messages. This still only takes into account the individual message, not any previous messages or the context of the current chat, and it will never be perfect because of that, but it is substantially improved from before, and ban messages now explain what was detected.
This is actually good context, and does deserve to be added as a line to the MOTD. "Moderation is automatic, and messages will be read for violations." The flaws of the model don't need to be discussed, of course.
> People often say "just mute". Honestly, I'd love to, and we have ways to do it, but the problem we found early in the project's history is that anything short of a hard ban resulted in people testing the system to find ways around it while no human moderators were available. I'd rather have a chat where the positive contributors occasionally have to help us moderate by filing the odd appeal to tell us the bot (or we ourselves) got it wrong, than allow a wrongdoer to have unfettered access to the community for potentially hours on end.
> These are things I have chosen: my 'requirements' for putting our time, energy, name, and money into this project, which has grown in scope and complexity beyond anything originally imagined. (Many of us moderate outside our paid shifts because we love the chat/community.)
I think this is why clarifying the boundaries the space already has is very important. The automatic moderation is good enough that anyone who decides to test the boundaries after being explicitly told what they are is footgunning themselves by trying, so ban 'em! They deserve it, especially if the existing rules are fully codified.
> The MoTD suggestions are all correct, and the easiest solution is an 'agree' checkbox that captures the initial MoTD and doesn't allow further messages in the chat until it has been accepted, but this comes back to having a stable code-base to allow backporting, as I would rather provide a consistent experience to end users where possible. (The LMGTFY link was also meant to let people know it's okay to still have a bit of fun; it seems it didn't work out that way.)
I'm not sure that's really in scope for this issue, but I guess it would be appreciated? Really what I want here is for the content of the MOTD to be changed.
> I do hope this at least explains our reasoning and gives you hope for future changes, even if it does not fully address the problems you see.
It was good to get some transparency, but maybe I should've clarified better what I actually wanted changed? I do hope the issue can move forward, even if it's just to a #TODO somewhere.
I'm not asking for any changes to the existing rules, just for the rules as written to appear less as a vibes-based system, and more as a set of boundaries for users not to cross. A "watch your step" sign, if you will. Ideally, this is what any good rules list hopes to do. If someone decides to test clearly defined boundaries, then they obviously deserve to be moderated over it. All that needs to be done is for those existing boundaries to be more clearly defined. Thanks for taking the time to listen, and to reply!
I also think it's kind of hypocritical to have rules about what is allowed to be said, and then quote something that wouldn't be allowed to be said in chat.
Specifically, "Follow Wheaton's law" and the LMGTFY link explaining to explain the law is "Don't be a Dick."
The "Let Me Google That For You" link IS condescending. The whole purpose of that site is to be condescending.
Referencing, as a rule, something that users wouldn't be allowed to say is hypocritical. I'm sure that even just answering the question "What is Wheaton's law?", posed by someone who couldn't or didn't want to open a link inside the game, would get a person censored and/or banned.
If someone has to censor themselves and/or modify words to explain or say what a rule is, then it's not a good rule. If I can't say "Don't be a dick" in chat, then it shouldn't be referenced in the rules.
Is your feature request related to a problem? Please describe.
There's a plethora of problems related to the way the rules are presented in the MineTogether chat, but I'm sure a list will suffice:
Describe the solution you'd like
Think from a user perspective. What discussions are expressly permitted or prohibited in the chat? What would a parent want to know about what their kid is going to experience on this platform?
Are there any current topics you don't want discussed? Tell me! I'm sure my ban ~2 years ago wasn't 100% about "expletives," given the political nature of even mentioning generations. If I'd known from some kind of community post, blog, or news post somewhere, I could've avoided mentioning something that was expressly prohibited, and avoided a ban. The same could have gone for everyone else on the platform.
Describe alternatives you've considered
No response
Additional context
Note that the rules do not have to be revamped! What I mean is that the moderation policy should be written down! Take the vibes you currently have and put them on paper. That way, the userbase gets to know the boundaries of your space without having to test them. Also, providing an LMGTFY link on your rules page is a massive pain point. The internet has grown. People have changed. People don't want to be condescended to in that way anymore. People shouldn't have to Google the rules of your space, and they sure as hell shouldn't have it insinuated that they're stupid for not knowing the rules when the rules aren't properly spelled out.