LearnersGuild / game-prototype

Lightweight, minimal implementation of game mechanics for rapid experimentation and prototyping.

Relative contribution impacted by relative accuracy #146

Closed shereefb closed 7 years ago

shereefb commented 7 years ago

Critical Goals

- Learner stats match our subjective reality, elicit honest feedback, and promote self-directed, collaborative growth
- Game is balanced

Benefits

Lessens the damage bad actors do to other players and weakens the positive feedback loop of being a bad actor. This dampens the "reward" for being dishonest or careless and raises the overall intelligence of the system.

It works much like a prediction market: if you're bad at predicting, you have less "currency" to predict with, so you influence predictions less until you get good at predicting.

Description

We calculate relative contribution based solely on the assessment of the player with the best assessment accuracy. No more averaging (unless assessment accuracies are equal). We simply pick the least "biased" person, and their word is final with regard to relative contribution.

After relative contribution is assigned, Elo is updated and XP is assigned; then all players' bias and accuracy stats get adjusted accordingly, which impacts the next cycle.

If any player on the team has no assessment accuracy (this is their first game), then the results are averaged across all players for that game.
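The selection rule above can be sketched as follows. This is a minimal illustration, not code from the game; the function and data shapes (`assessments`, `accuracy`) are hypothetical stand-ins for however the game stores per-assessor relative-contribution estimates and accuracy stats.

```python
from statistics import mean

def relative_contribution(assessments, accuracy):
    """Pick the RC estimates of the most accurate assessor.

    assessments: {player_id: [rc_0, rc_1, ...]} -- each assessor's
        relative-contribution estimates for the whole team.
    accuracy: {player_id: float or None} -- None means no history
        (first game), which forces a plain average for this game.
    """
    # Anyone on the team lacking an accuracy stat: fall back to averaging.
    if any(accuracy.get(p) is None for p in assessments):
        return [mean(vals) for vals in zip(*assessments.values())]

    best = max(accuracy[p] for p in assessments)
    top = [p for p in assessments if accuracy[p] == best]
    if len(top) == 1:
        # Least "biased" person: their word goes.
        return list(assessments[top[0]])
    # Exact tie on accuracy: average only the tied assessors' estimates.
    return [mean(vals) for vals in zip(*(assessments[p] for p in top))]
```

For example, with two assessors reporting `[0.5, 0.5]` and `[0.8, 0.2]`, the result is the `[0.5, 0.5]` estimate whenever the first assessor's accuracy is strictly highest, and the element-wise average `[0.65, 0.35]` on a tie or when either player has no history.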

Implementation/Impact

All stats would need to be re-run with this change. This will change every single stat for every single player!

shereefb commented 7 years ago

cc @LearnersGuild/software ready for review!

bundacia commented 7 years ago

I like this idea. I would suggest that we change it from "least biased person's word is law" to a weighted average so that each person's estimate contributes some to the result based on their bias stat. It seems dangerous to just throw away everyone else's perspectives in favor of the least biased person, especially in cases where everyone's bias was very similar.
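For comparison, the weighted-average alternative proposed here could look like the sketch below (again hypothetical names, same data shapes as above): each assessor's estimate contributes in proportion to their accuracy stat, so a rogue input is diluted rather than discarded.

```python
def weighted_rc(assessments, accuracy):
    """Weighted average: each assessor's RC estimates count in
    proportion to their accuracy stat."""
    total = sum(accuracy[p] for p in assessments)
    size = len(next(iter(assessments.values())))
    return [
        sum(accuracy[p] * assessments[p][i] for p in assessments) / total
        for i in range(size)
    ]
```

With estimates `[0.6]` and `[0.4]` and accuracy weights 0.75 and 0.25, the result is 0.75 × 0.6 + 0.25 × 0.4 = 0.55, closer to the more accurate assessor but still influenced by the other.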

jeffreywescott commented 7 years ago

Agree with @bundacia -- weighting isn't significantly harder, I don't think.

shereefb commented 7 years ago

I strongly disagree with this one, folks. What I want is to completely eliminate the positive feedback loop of a "rogue" input.

Say Yasseen says, "fuck it, I'll put 10 hours, 80% contribution and see what happens." What happens is: you get ZERO benefit from that move, and your bias stat takes a hit.

Reducing the benefit from an "asshole" move is good, but completely eliminating it is better.

jeffreywescott commented 7 years ago

Got it, @shereefb -- makes sense, and you know best (being that you're both moderator and seeing things on the floor, as well as Game Mechanics). :)

bundacia commented 7 years ago

I'm worried we're overcorrecting for the asshole problem and creating other problems in the process. If my accuracy is 55% and the other three people on my team have an accuracy of 54%, and they all agree on RC while I report something very different, are we really just going to ignore all of their responses (and potentially penalize them by lowering their accuracy stats)? I feel like this needs to be a little finer grained so it can self-correct a little.

One other related question: how does this affect the way we calculate the bias and accuracy stats? Are we still averaging the RC values when computing those stats? If not, then weird things will happen when you end up being the least biased person on the team. Whatever you say is taken as gospel, and then your bias stat improves significantly because you were in exact agreement with yourself. Pretty quickly you're gonna get 100% accuracy, and your word will be LAW on every project you work on.

shereefb commented 7 years ago

@bundacia bias and accuracy stats should be calculated in exactly the same way. So after implementing this feature, XP and Elo will change for all players, but bias, accuracy, and health stats won't. Otherwise, we have a runaway system like you described, with a positive feedback loop: the more accurate I've been, the more the system trusts me, the more my future feedback is perceived as accurate, the more the system trusts me.

Most people's accuracy is above 90, and a small difference (between 90 and 93) is significant. We could create a range where two people's accuracy is considered the same. For example if it's within 0.5% of each other, then we average their results. I think that's adding unnecessary complexity though.
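The tolerance-band idea described above (and then set aside as unnecessary complexity) would amount to something like this sketch, with a hypothetical helper that treats accuracies within 0.5 points of the best as tied, so their estimates would be averaged:

```python
def effective_top(accuracy, tolerance=0.5):
    """Return the assessors whose accuracy stat is within `tolerance`
    of the best on the team; their RC estimates would be averaged
    instead of taking only the single most accurate assessor's word."""
    best = max(accuracy.values())
    return [p for p, acc in accuracy.items() if best - acc <= tolerance]
```

For instance, with accuracies 93.0, 92.7, and 90.0, the first two fall inside the 0.5-point band and would be averaged, while the third is excluded.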

shereefb commented 7 years ago

rfi @jeffreywescott

jeffreywescott commented 7 years ago

Issue moved to LearnersGuild/game #680 via ZenHub