Comment Anything Browser Extension

Moderation #4

Open · Bkrenz opened 2 years ago

Bkrenz commented 2 years ago

We've briefly discussed moderation in regard to comments. Here are a couple of related thoughts.

First, the flagging of comments for review. This could involve a mix of community flagging and blacklisting certain words or phrases; as a stretch feature, machine learning or a third-party service might automate the process. A minimal sketch of the blacklist piece follows.
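
Something like the following could cover the blacklist source, assuming a simple substring match; the term list, the `FlagResult` shape, and the function name are placeholders, not decisions from this thread:

```typescript
// Hypothetical blacklist check: one of the flagging sources described above.
// The terms and the FlagResult shape are illustrative placeholders.
const BLACKLIST: string[] = ["badword1", "badword2"];

interface FlagResult {
  flagged: boolean;
  reasons: string[];
}

function checkAgainstBlacklist(comment: string): FlagResult {
  const lowered = comment.toLowerCase();
  const reasons = BLACKLIST
    .filter((term) => lowered.includes(term))
    .map((term) => `matched blacklisted term: ${term}`);
  return { flagged: reasons.length > 0, reasons };
}
```

Community flags and any future ML or third-party scores could emit the same `FlagResult` shape, so every source would feed one review queue.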

Second, for manual review of reported content, there is the idea of community volunteers. Registered users whose accounts are older than some threshold and who have clean histories could be given the option to participate in comment review (sketched below).
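
As a sketch of that eligibility check, assuming an account-age threshold and a count of upheld reports; the 180-day figure and field names are hypothetical:

```typescript
// Hypothetical eligibility check for volunteer reviewers. The 180-day
// account age and zero-upheld-reports rule are illustrative thresholds.
interface UserRecord {
  registeredAt: Date;
  upheldReportsAgainst: number; // reports on this user that moderators upheld
}

const MIN_ACCOUNT_AGE_DAYS = 180;
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function isEligibleVolunteer(user: UserRecord, now: Date = new Date()): boolean {
  const ageDays = (now.getTime() - user.registeredAt.getTime()) / MS_PER_DAY;
  return ageDays >= MIN_ACCOUNT_AGE_DAYS && user.upheldReportsAgainst === 0;
}
```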

klm127 commented 2 years ago

Moderation is critical to this endeavor having any chance of success. It is the difference between a comment section worth reading and one that isn't.

I believe the key to good moderation is promoting free expression of ideas without vitriolic content. In other words, users should be allowed to express the full range of opinions as long as they do not spam or viciously attack each other personally. Profanity would be disallowed, of course, but so would aggressive and excessive behavior (or at least, such behavior might trigger mutes on certain pages or comment-delay timers).

I propose that, in addition to having moderators review flagged content, we run sentiment analysis on comments and automatically delay, warn on, or flag comments that fail to meet our AI's standards. Some comments could be hidden unless the viewer is logged in with the corresponding setting enabled. A rough mapping from sentiment score to action is sketched below.
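
As a sketch, assuming whatever analysis service we end up with returns a score in [-1, 1]; the cutoffs are invented for illustration, not tuned values:

```typescript
// Hypothetical mapping from a sentiment score to the proposed actions.
// Assumes -1 is maximally hostile; thresholds are placeholders.
type ModerationAction = "allow" | "delay" | "warn" | "flag" | "hide";

function actionForSentiment(score: number): ModerationAction {
  if (score < -0.8) return "hide";  // shown only to logged-in users who opt in
  if (score < -0.6) return "flag";  // queued for human review
  if (score < -0.4) return "warn";  // author is warned before the comment posts
  if (score < -0.2) return "delay"; // comment appears after a cool-down period
  return "allow";
}
```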

These standards WOULD wind up being arbitrary, but it would be an equal-opportunity arbitrariness that funnels comment behavior toward more positive spirits, and perhaps toward more creative insults as commenters cleverly evade the algorithm.

We will need to weigh the processing cost of this sort of analysis against reduced moderation costs. (Even volunteers require time and oversight.) As long as we are performing the analysis anyway, we could save the results and gain marketable insights into comment behavior and consumer sentiment.

bedekovich commented 2 years ago

I think the best way to tackle moderation would be to have some sort of algorithm sort and flag comments based on specific words, and then, depending on resources, evolve it into detecting tone and intent like we discussed in class. That way we don't run into an issue where we say in the docs that we are going to use an AI when we don't have the resources for it.

Once a comment is flagged there needs to be some sort of human review, whether it be us, volunteers, or paid moderation. I don't see a world where we go fully automated on review, as that tends not to work properly even for major companies; take YouTube, where every other comment is a scam website. For the initial version of the project, having volunteers review the content is the best option, since we won't have the resources for paid moderation out of the gate. If we cannot get a large enough initial volunteer pool, we can manually review comments alongside whatever volunteers we do get, since this is just a senior project at its base. A sketch of the flag-then-review workflow follows.
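
Something like the following could model that queue; the roles, verdicts, and record shape are assumptions, not settled design, and in practice this would live server-side:

```typescript
// Hypothetical flag-then-review queue. Roles, verdicts, and the record
// shape are illustrative assumptions.
type ReviewerRole = "maintainer" | "volunteer" | "paid";
type Verdict = "approve" | "hide" | "remove";

interface FlaggedComment {
  commentId: string;
  reasons: string[];
  verdict?: Verdict;
  reviewedBy?: ReviewerRole;
}

const reviewQueue: FlaggedComment[] = [];

// Every flagged comment waits for a human verdict; nothing is auto-removed.
function submitVerdict(item: FlaggedComment, role: ReviewerRole, verdict: Verdict): void {
  item.verdict = verdict;
  item.reviewedBy = role;
  const i = reviewQueue.indexOf(item);
  if (i !== -1) reviewQueue.splice(i, 1); // reviewed items leave the queue
}
```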

I would also like to propose a different method for handling comments that are flagged and human-reviewed. Instead of outright removal being the only option, we could also have an option to hide the comment. A hidden comment isn't immediately shown; the user has to click on it once to see it, rather than it being removed entirely. This can be useful when a comment isn't directly harmful (like scam links or threats of violence to others) but is extremely negative or widely disliked. For an example of what I'm talking about, look at how Odyssey handles its comments once they have enough downvotes. The three resulting visibility states are sketched below.
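
A sketch of those states, assuming a click-to-reveal placeholder for hidden comments; the names are illustrative only:

```typescript
// Hypothetical visibility states: "hidden" renders as a click-to-reveal
// placeholder, while "removed" content is never rendered at all.
type Visibility = "visible" | "hidden" | "removed";

interface RenderedComment {
  body: string;
  visibility: Visibility;
  revealed: boolean; // set when the user clicks a hidden comment
}

function displayText(c: RenderedComment): string | null {
  if (c.visibility === "removed") return null; // nothing to render
  if (c.visibility === "hidden" && !c.revealed) {
    return "[comment hidden - click to view]";
  }
  return c.body;
}
```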

From what I can tell, moderation is a hard question to tackle because resources seem to be the main factor in what we can or can't do. In an ideal world we would have a perfect flagging algorithm and enough funding for paid moderation that it wouldn't be an issue.

But to keep it within the scope of the project, I think we should go with a simplified flagging algorithm unless time allows us to make a more advanced one; either just volunteers, or us plus volunteers, for the manual review; and comments handled with both removal and hiding.