Twentysix26 / x26-Cogs

General purpose cogs for Red V3
GNU General Public License v3.0
41 stars · 29 forks

[Feature request] Defender/CommentAnalysis - Enable Providing Comment Score Feedback / No Delete Action #32

Closed Grommish closed 2 years ago

Grommish commented 3 years ago

Cog

Describe the feature you'd like to see
When using CommentAnalysis, the threshold limit has to be set to 98%+ to avoid false positives. This is not a Defender issue; however, in order to provide feedback to the AI API to correct these false positives, I am requesting two items.

First, the ability to notify but not delete messages that trigger the CommentAnalysis threshold limits. Currently, the "None" action actually deletes the message in addition to logging it.

Second, the ability to submit Feedback scores from the reported message embed, as outlined in the API Docs under Sending feedback: SuggestCommentScore

This would allow us to submit corrections for false positives in context, while still alerting staff to actual TOXIC/INSULT/THREAT messages.
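As a rough illustration of the feedback mechanism being requested, the Perspective API docs describe a SuggestCommentScore method that accepts a corrected score for a comment. The sketch below only builds the request payload; the endpoint URL, field names, and the `communityId` value are taken from my reading of the public docs and should be verified against the current API reference before use.

```python
import json

# Endpoint named in the Perspective API docs for score feedback (verify before use).
SUGGEST_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:suggestscore"

def build_suggest_payload(text: str, attribute: str, corrected_score: float,
                          community_id: str = "my-discord-guild") -> dict:
    """Build a SuggestCommentScore body telling Perspective the score we believe is correct."""
    return {
        "comment": {"text": text},
        "attributeScores": {
            attribute: {"summaryScore": {"value": corrected_score}}
        },
        "communityId": community_id,  # hypothetical identifier for this community
    }

# A false positive from a medical discussion, corrected down to a low toxicity score.
payload = build_suggest_payload(
    "Discussion of parasites in a medical context", "TOXICITY", 0.05
)
print(json.dumps(payload, indent=2))
```

In a cog, a staff member reacting to the reported message embed could trigger a POST of this payload (with an API key) to the endpoint above.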

Twentysix26 commented 2 years ago

Message deletion toggle done in 1.8. I don't think the other suggestion is a good fit, as all messages get sent with the doNotStore attribute

Grommish commented 2 years ago

> Message deletion toggle done in 1.8. I don't think the other suggestion is a good fit, as all messages get sent with the doNotStore attribute

Perhaps this could be a togglable option as well?

> Does Perspective store comments after they are scored?
>
> It is up to you. You can choose to have comments stored to be used to improve future models or you can enable an option which will automatically delete comments after they have been scored. Anyone using the Perspective API is covered under the developer terms of service. Check out our API methods and the doNotStore option for more information.
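For context, the flag under discussion sits in the AnalyzeComment request body. This is a minimal sketch of such a body with doNotStore set, following my understanding of the public API reference; treat the attribute names and field layout as illustrative.

```python
import json

def build_analyze_payload(text: str, do_not_store: bool = True) -> dict:
    """Build an AnalyzeComment body; doNotStore asks Perspective to discard the comment after scoring."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "doNotStore": do_not_store,
    }

print(json.dumps(build_analyze_payload("example message"), indent=2))
```

Making the suggested toggle would amount to flipping this one field per guild setting, at the cost of the privacy guarantee Twentysix26 describes.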

My issue is that my Discord server has 10k+ medical students, so the topics and "natural language" there are almost diametrically opposed to what the model expects. Being able to correct the AI so it can better identify mis-scored comments is the goal.

For example: Someone talking about Parasites in a medical context triggers Toxicity at like 95%.

I'll rebase locally and continue to play. If I come up with something worthwhile, I'll let you know and decide if you want to mainline it or not. I respect that what I need isn't for everyone!

Twentysix26 commented 2 years ago

It's unlikely I'll ever add it due to privacy concerns: users' messages would be sent to and stored by a third party without their explicit consent.
I understand what you're trying to do, but I suggest going the easier route and only analyzing messages from new users: it's a good tool, but there is the occasional false positive, especially when messages are analyzed without context

Grommish commented 2 years ago

I understand. Our goal would primarily be to teach the AI; privacy is less of a concern for content on a publicly available educational server without private areas, but I know it will certainly be a hot-button topic for most guilds. I figure the worst case is that I get it working and, in the unlikely event someone else needs it, it'll be available.

I appreciate all the hard work you and everyone involved with the project do!

Edit: Thank you also for the heads up on the doNotStore flag!