Yellow-Dog-Man / Resonite-Issues

Issue repository for Resonite.
https://resonite.com

Profanity Filter for new account creation and name changes #3078

Open Foxxboxx opened 1 month ago

Foxxboxx commented 1 month ago

Is your feature request related to a problem? Please describe.

There have been issues in the past where trolls have created accounts with offensive wording in their name. Recently I reported a moderation ticket for one who had the N word as their username.

Describe the solution you'd like

The solution to this problem would be a profanity filter that checks a username at account creation or on a name change, preventing the use of common vulgar and/or offensive language. While this would not be perfect, it would stop people from using specific words outright.

Describe alternatives you've considered

An alternative would be a filter that flags a username it thinks contains profanity and/or offensive language and sends it to moderation to take a look at and make a decision. Another alternative is leaving things as they are and letting the moderation team deal with it as it appears.

Additional Context

No response

Requesters

foxbox.

Dusty-Sprinkles commented 1 month ago

To be fair, the self-report (using slurs as a name) probably makes predicting their bad behavior easier later, but auto-flagging might not be a bad option.

Frooxius commented 1 month ago

We are looking into using an Azure service for this at some point, which can provide pretty robust filtering while also eliminating false positives (the Scunthorpe problem).

@ProbablePrime could provide more info on this one.

ProbablePrime commented 1 month ago

We were planning to use Azure Content Moderation service to handle this.

However, that service just shut down, and its functionality has been moved over to a product called Azure AI Content Safety.

That might, however, cause some alarm, so I spent some time researching what had happened with the switch-over and seeing what parts we could use/need.

Narrowed it down to this small corner of the product: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/quickstart-text?tabs=visual-studio%2Cwindows&pivots=programming-language-csharp

Looks simple enough to implement :)

EDIT: I will spend some more time looking into what Azure/Microsoft can do with the data we pass through it, too. If the data passed can be used for training AI or something like that, we'd discuss that more closely within the team.

EDIT: https://learn.microsoft.com/en-us/legal/cognitive-services/content-safety/data-privacy?context=%2Fazure%2Fai-services%2Fcontent-safety%2Fcontext%2Fcontext#is-customer-data-used-to-train-the-azure-ai-content-safety-models

GOOD, that makes me happy to use it. It's also cheap to use in this circumstance.

Foxxboxx commented 1 month ago

Just as long as the system isn't overly aggressive. I got GTA V to play with my friends and was disappointed that I couldn't use my username in any form because "box" was considered profanity.

ProbablePrime commented 1 month ago

It looks like everything is based on a "severity" score, so it's likely we'd keep the severity requirements high to start, and we can tweak as we go.
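The severity-based gating described above could be sketched roughly like this (a minimal illustration only: the category names and the 0/2/4/6 severity scale follow Azure's text-moderation documentation, but the threshold value and function names are assumptions, not Resonite's actual implementation):

```python
# Hypothetical starting threshold: only high severities block a name,
# to be tuned downward (or upward) as real data comes in.
DEFAULT_THRESHOLD = 4  # Azure text severities are reported as 0, 2, 4, or 6

def is_name_allowed(severities: dict[str, int],
                    threshold: int = DEFAULT_THRESHOLD) -> bool:
    """Return True if every category's severity is below the threshold.

    `severities` stands in for the per-category scores the moderation
    service would return for a candidate username.
    """
    return all(score < threshold for score in severities.values())

# Example: a clean name passes, a high-severity hit is rejected.
clean = {"Hate": 0, "Sexual": 0, "Violence": 0, "SelfHarm": 0}
flagged = {"Hate": 6, "Sexual": 0, "Violence": 0, "SelfHarm": 0}
print(is_name_allowed(clean))    # True
print(is_name_allowed(flagged))  # False
```

Starting with a high threshold like this addresses the GTA-style over-aggression concern above: a borderline severity-2 hit would still be allowed until the team deliberately lowers the line.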

shiftyscales commented 1 month ago

@ProbablePrime - when it comes to testing and implementation of the filter - would it be possible to run it across the existing data set of created account names to get a better sense of what would be triggered at each score? That would help us in tuning it / deciding where that line should be, as well as possibly catching existing cases that hadn't yet been reported.

ProbablePrime commented 1 month ago

Sure, we can try some form of that. Likely not the full set, but a good representative sample, yes.
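The "representative sample" scan proposed above could be sketched as follows (purely illustrative: `toy_score` stands in for the real moderation call, and all names and numbers here are made up for the example):

```python
import random
from collections import Counter

def sample_usernames(all_names: list[str], k: int, seed: int = 0) -> list[str]:
    """Draw a reproducible random sample of existing account names,
    a stand-in for 'not the full set but a good representation'."""
    rng = random.Random(seed)
    return rng.sample(all_names, min(k, len(all_names)))

def severity_histogram(names, score_fn):
    """Bucket names by the severity score_fn assigns, so the team can
    see what each candidate threshold would catch before drawing the line."""
    return Counter(score_fn(name) for name in names)

# Toy scorer in place of the actual content-safety service call.
def toy_score(name: str) -> int:
    return 6 if "badword" in name.lower() else 0

names = ["alice", "bob", "BadWordFan", "carol"]
print(severity_histogram(names, toy_score))  # Counter({0: 3, 6: 1})
```

The histogram step is the useful part for tuning: it shows how many existing names fall into each severity bucket, which directly answers "what would be triggered at each score" without having to decide the threshold first.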