JamBrain / JamBrain

The software powering Ludum Dare game jam events
https://ldjam.com
MIT License

Comment Spam #219

Open mikekasprzak opened 8 years ago

mikekasprzak commented 8 years ago

Thanks to Joseph (last name withheld) for helping dig this up.

A user steph88 is not a bot, but may as well be.

Hi, nice game and congratulations ! Simple mechanics that still offered a decent amount of challenge. We're realizing a video with several games of the Ludum Dare #36. We made the same thing at the previous jam.

Can you add your game on REDACTED ? (it's free) So we can include also your game in the video ;) p.s. write #LDJAM in the game's description.

As things stand, there really isn't a policy against this, it's just uncool. Very uncool.

Given our size, flagging comments is something that needs to be addressed. I don't want LD to devolve into Reddit, with its ups and downs, so flagging must be a special case. It needs to be somewhat difficult to do. Care must also be taken so it isn't used for censorship (e.g. to bury negative criticism). It should only ever be used, and acted upon, in cases of abuse (hateful things said against others, and cases like the one above).

One flag shouldn't be enough to remove a suspicious comment, because of the possibility of abuse. That said, some people will, whether we like it or not, use flagging to mark comments they don't like.

That said, it's probably worth considering implementing local hiding. When a comment is flagged by a user, hide it and its children from them. When a comment gets enough flags, it gets removed for everyone. Where that fits in is TBD (a Flag-Count field on every comment?).

We can't rely on simple metrics like "5 flags" anymore, so we will have to go with trust scores.
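A minimal sketch of how local hiding plus trust-weighted removal could fit together, assuming hypothetical names (`Comment`, `Flag`, `REMOVAL_THRESHOLD`) that aren't part of JamBrain:

```typescript
// Sketch only: hide a flagged comment locally for the flagger,
// and remove it for everyone once the combined trust of its flaggers
// crosses a threshold (instead of a raw "5 flags" count).

interface Flag {
  userId: number;
  trust: number; // this flagger's trust score, e.g. 0..1
}

interface Comment {
  id: number;
  flags: Flag[];
}

const REMOVAL_THRESHOLD = 3.0; // would need tuning against real data

// Local hiding: a comment (and, by extension, its children) is hidden
// from any user who flagged it.
function isHiddenForUser(comment: Comment, userId: number): boolean {
  return comment.flags.some(f => f.userId === userId);
}

// Global removal: weight flags by flagger trust rather than counting them,
// so a handful of throwaway accounts can't take a comment down.
function isRemovedForEveryone(comment: Comment): boolean {
  const weight = comment.flags.reduce((sum, f) => sum + f.trust, 0);
  return weight >= REMOVAL_THRESHOLD;
}
```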

mikekasprzak commented 8 years ago

Like blocking people on Twitter, it's probably a good idea to ask for a "flag reason". Options could include things like "I don't want to see this", "Spam", "Offensive", and "Not kid friendly" (see the sketch at the end of this comment).

If someone wants to be picky about the comments they see, they should be allowed to be.

Also, a kid-friendly mode, if it becomes a thing, is going to need data to know which comments are and aren't safe. It's not a foolproof system; we can't promise to protect minors from inappropriate language or themes, but we can do a little better.

What each truly means is TBD. Something like "I don't want to see this" could have a score of 0, but offensive/spam could be worth 1. That said, I'm not entirely convinced this is a simple numeric system anymore. Inappropriate for kids is just a flag, not something that removes a comment.
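A minimal sketch of how those flag reasons could map to removal weights, assuming hypothetical names and values; only the 0 for "I don't want to see this", the 1 for offensive/spam, and "inappropriate for kids" being a non-removing flag come from the notes above:

```typescript
// Sketch only: per-reason weights toward removal. A weight of zero means
// "hide it for me, but don't count it against the comment".

enum FlagReason {
  DontWantToSee = 'dont-want-to-see',   // personal filter only
  Spam = 'spam',
  Offensive = 'offensive',
  NotKidFriendly = 'not-kid-friendly',  // feeds kid-friendly mode, never removes
}

const REMOVAL_WEIGHT: Record<FlagReason, number> = {
  [FlagReason.DontWantToSee]: 0,
  [FlagReason.Spam]: 1,
  [FlagReason.Offensive]: 1,
  [FlagReason.NotKidFriendly]: 0,
};
```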

mikekasprzak commented 8 years ago

Implementation-wise, this may make the most sense as a modification of Karma. No new tables need to be added, just another field and an index. You're limited to setting one state per comment.

The majority of these are negative states, and they could be mapped into the address space of a signed 8-bit integer, spaced out according to severity (offensive and spam being the worst).

Author Karma could become part of this system: the new karmas above would be negative values, regular likes positive, and author karma and/or other fancy karmas worth more than that.

Off-hand, I'm thinking of applying a 3-bit shift, meaning each severity level has 8 possible choices, with 16 possible levels (i.e. 4 bits, the last bit reserved for the sign). We might not need 16 levels, though; 16 choices per level may turn out to be more important.

Probably need a graph.
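A minimal sketch of that packing, assuming hypothetical function names; the 3-bit shift, the 16 levels, and the sign bit follow the paragraph above, everything else is illustrative:

```typescript
// Sketch only: pack a karma state into a signed 8-bit value.
// Low 3 bits = one of 8 choices within a severity level,
// next 4 bits = one of 16 levels, sign = negative (flag) vs positive (like).

function encodeState(level: number, choice: number, negative: boolean): number {
  if (level < 0 || level > 15) throw new Error('level must fit in 4 bits');
  if (choice < 0 || choice > 7) throw new Error('choice must fit in 3 bits');
  const magnitude = (level << 3) | choice; // 0..127, fits a signed 8-bit field
  return negative ? -magnitude : magnitude;
}

function decodeState(value: number): { level: number; choice: number; negative: boolean } {
  const negative = value < 0;
  const magnitude = Math.abs(value);
  return { level: magnitude >> 3, choice: magnitude & 0b111, negative };
}

// Example: spam as the first choice in the most severe level.
const spamState = encodeState(15, 0, true); // -120
```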

sgstair commented 7 years ago

I think that instead of fully automated removal, comments whose flags pass a threshold should be added to a moderator queue. Maybe they get temporarily removed at some level, but a human should have the final say.
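A minimal sketch of that flow, assuming hypothetical thresholds and names; the point is that automation only queues or temporarily hides, and a human makes the final call:

```typescript
// Sketch only: flag weight moves a comment between states,
// but permanent removal requires a moderator decision.

type ModerationState = 'visible' | 'queued' | 'soft-hidden' | 'removed';

const QUEUE_THRESHOLD = 2.0;     // enough weight to ask moderators to look
const SOFT_HIDE_THRESHOLD = 4.0; // enough weight to hide it while they decide

function moderationState(
  flagWeight: number,
  moderatorDecision?: 'keep' | 'remove'
): ModerationState {
  if (moderatorDecision === 'remove') return 'removed';
  if (moderatorDecision === 'keep') return 'visible';
  if (flagWeight >= SOFT_HIDE_THRESHOLD) return 'soft-hidden';
  if (flagWeight >= QUEUE_THRESHOLD) return 'queued';
  return 'visible';
}
```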

mikekasprzak commented 7 years ago

One of my goals with the site is to remove the need for moderators (in my experience, mods have a "life expectancy" before they move on to other things). So you may see mention of a number of systems that imply a hidden system of trust. There are of course cases that require an executive decision, but 99% of cases aren't that.

local-minimum commented 7 years ago

I think any list of options should include the ability to flag things as criminal/harassment. In general I agree that the best thing is to build the site so it encourages good behavior, but I would also feel more comfortable if we had good venues for reporting serious problems. It could also be worth reaching out to some of those who have experience with (being) victims of online abuse for advice on how to think about good and safe design.

sgstair commented 7 years ago

The flip side problem to consider is the potential for flaggers to abuse the system to harass people or ideas they don't like. This is why avoiding human moderation is an idealistic dream, not currently grounded in reality :) It will mostly work well until it really doesn't. At a minimum we need a dashboard where automatically taken actions are reviewed on a regular basis, and by the time an action is taken in such a system, the damage is usually already done.

I also reject the notion that victims of online abuse have any better idea about what constitutes a safe environment than someone just thinking through the problems rationally. And they're much more likely to be interested in unhelpful draconian overreach as a result of their experience.

mikekasprzak commented 7 years ago

If this weren't a niche platform for creators, I'd agree that automation is impossible. But we are niche, averaging around 5,000 active users during an event. I could see us getting as high as 10k once things are smoothed out, but it's unlikely we'd ever have more than 20k at any one time.

Because a line is drawn between those who made something and those who didn't, we avoid a significant number of problems. We also have a higher proportion of people who actually understand what they're talking about than a subreddit or forum does. Not a lot of communities work this way. We quite literally have a vetting process, and it's participation.

Of course there will always be troublemakers, and I'm not suggesting we have no moderation tools. But only the most conniving abuse would be missed. A yelling match is anything but subtle.

local-minimum commented 7 years ago

I'm not really trying to suggest that those who have suffered abuse would be better per se at figuring out how to design (or not design) safe online experiences. I am, however, quite convinced that it's important to have a diverse group of people with different backgrounds and experiences involved in discussions about such systems. And I guess I want to hold LD to higher-than-average standards in this regard, because game making matters to me, and games and tech in general are quite toxic places. So, even though I have zero negative online experience myself, I'm going to take the annoying, questioning position until it's proven that I can relax.