LearnersGuild / game-prototype

Lightweight, minimal implementation of game mechanics for rapid experimentation and prototyping.

Pool & Project formation (v1) #13

Closed: jeffreywescott closed this issue 8 years ago

jeffreywescott commented 8 years ago

Overview

  1. Players' Elo ratings are calculated. LearnersGuild/game-prototype#9
  2. Pool Formation
    • Moderator initiates cycle
    • Different voting pools are formed and players are assigned to them
    • Players vote in their separate pools
  3. Project Formation
    • Moderator launches cycle
    • Project formation algorithm places the players in each pool onto projects and distributes the pool's advanced players across those projects

Pool formation

Pools are formed when the moderator initiates a cycle. The moderator passes a pool threshold config file to /cycle init, which defines:

  1. The number of different pools
  2. The Elo thresholds between pools
  3. Advanced players assigned to the pools

For example, the config file could define:

| Pool Elo min | Pool Elo max | Advanced Players |
| --- | --- | --- |
| 0 | 1000 | @bluemihai, @jaredtron |
| 1001 | 1100 | @shereefb, @tanner |
| 1101 | 9999 | @carla, @needdra |
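
For illustration, here is a minimal sketch of how such a config could be represented and applied; the type and function names are hypothetical, not part of the spec:

```ts
// Hypothetical shape for the pool threshold config described above.
interface PoolThreshold {
  eloMin: number;
  eloMax: number;
  advancedPlayers: string[];
}

const poolConfig: PoolThreshold[] = [
  { eloMin: 0,    eloMax: 1000, advancedPlayers: ['@bluemihai', '@jaredtron'] },
  { eloMin: 1001, eloMax: 1100, advancedPlayers: ['@shereefb', '@tanner'] },
  { eloMin: 1101, eloMax: 9999, advancedPlayers: ['@carla', '@needdra'] },
];

// Assign a player to the pool whose Elo band contains their rating.
function assignPool(elo: number, config: PoolThreshold[]): number {
  const i = config.findIndex(p => elo >= p.eloMin && elo <= p.eloMax);
  if (i === -1) throw new Error(`no pool covers Elo ${elo}`);
  return i; // pool id
}
```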

Pools get formed and given names (or ids), and players are informed of which pool they were assigned to. Advanced players are also informed of their pools but they are not treated any differently at this point.

Note: The moderator will need a log of players that shows how many times they were assigned as an advanced player in previous cycles so they can author the pool threshold config file in such a way that non-paid advanced players rotate.

Project formation

Once a cycle is initiated and players have been informed which pool they belong to, they can vote within their pools. They can only see goals and votes by other players in their pools.

Nice to have: Moderator should have a way of monitoring different voting pools.

Once votes are in, the moderator launches the cycle via /cycle launch, passing a team threshold config file that defines Elo bands constraining the maximum number of concurrent projects a player can be on.

For example, the team threshold config file could dictate:

| Elo Threshold | Max Concurrent Projects |
| --- | --- |
| 1000 | 0 |
| 1100 | 1 |
| 1200 | 3 |
| 1500 | 4 |

This means that a player with an Elo rating below 1000 cannot be assigned as an advanced player to a team.

A player with an Elo rating of 1150 can be an advanced player on up to 3 concurrent projects.
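
A sketch of the lookup these two examples imply, assuming each threshold is the upper bound of an Elo band (names are hypothetical):

```ts
// Bands from the example above; each entry's eloMax is read as the upper
// bound of the band (an assumption based on the two examples in the text).
const teamThresholds = [
  { eloMax: 1000, maxConcurrentProjects: 0 },
  { eloMax: 1100, maxConcurrentProjects: 1 },
  { eloMax: 1200, maxConcurrentProjects: 3 },
  { eloMax: 1500, maxConcurrentProjects: 4 },
];

// How many concurrent projects a player may join as an advanced player.
function maxConcurrentProjects(elo: number): number {
  const band = teamThresholds.find(t => elo < t.eloMax);
  // e.g. elo 950 -> 0 (cannot be an advanced player); elo 1150 -> 3
  return band ? band.maxConcurrentProjects
              : teamThresholds[teamThresholds.length - 1].maxConcurrentProjects;
}
```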

Project Formation Algorithm

Teams are formed only by players already within their pool. There is no cross-pool poaching, borrowing, lending, or anything like that.

Constraints

Nice to have optimizations (in order of priority)

  1. Minimize the number of teams a player is on
  2. Maximize team novelty: put people in new team configurations
  3. Minimize the maximum number of times any player was "shafted" (not assigned to their first or second choice in previous cycles)
  4. Minimize the difference between the advanced player's Elo and the average Elo of the rest of the team
  5. Maximize the number of different goals that are selected

Note: This list is huge. We probably won't get to all of it in this epic. First four are really important, next 3 are really nice to have, last two are ponies and rainbows.
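
One way to make "in order of priority" concrete is a lexicographic comparison of candidate assignments, where a lower-priority objective is only consulted to break ties. A sketch with placeholder cost functions; the Assignment shape is hypothetical, not the real data model:

```ts
// Each cost function scores one prioritized optimization; "minimize" goals
// return costs, so lower is better. Fields here are stand-ins only.
interface Assignment {
  teamsPerPlayer: number;      // optimization 1
  repeatTeammatePairs: number; // optimization 2 (lower = more novelty)
  maxShaftings: number;        // optimization 3
}

const costs: Array<(a: Assignment) => number> = [
  a => a.teamsPerPlayer,
  a => a.repeatTeammatePairs,
  a => a.maxShaftings,
];

// Lexicographic comparison: a win on a higher-priority objective decides
// outright; lower priorities never override it.
function better(a: Assignment, b: Assignment): boolean {
  for (const cost of costs) {
    if (cost(a) !== cost(b)) return cost(a) < cost(b);
  }
  return false; // tied on every objective
}
```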

Game mechanics open questions / issues

shereefb commented 8 years ago

@tannerwelsh, @prattsj: here's my first stab at objectives and constraints for the V1 algorithm. Ready for review!

tannerwelsh commented 8 years ago

Create rating "pools" for players to vote in.

Another way to do this (not as restrictive, but simpler) would be to sort goals by difficulty (using the milestones we already have, for instance) and ensure that players can only vote for goals that are at or below their level.

In other words, it keeps beginner players from getting stuck on an advanced team, and allows advanced players to "take an easy week" if they like.

tannerwelsh commented 8 years ago

Calculate and use ELO rating to rank and identify player skill instead of ECC or XP

👍 posted in Slack already, but here are some NPM Elo packages: https://www.npmjs.com/package/elo-rank and https://www.npmjs.com/package/elo-rating
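
Basic usage of elo-rank, as I recall it from the package README; treat the exact API as an assumption and verify against the docs:

```ts
// Sketch based on the elo-rank README (unverified from memory).
import EloRank = require('elo-rank');

const elo = new EloRank(15); // K-factor

let playerA = 1200;
let playerB = 1400;

// Expected score of the first player against the second
const expectedA = elo.getExpectedScore(playerA, playerB);
const expectedB = elo.getExpectedScore(playerB, playerA);

// Update ratings: actual score is 1 for a win, 0 for a loss
playerA = elo.updateRating(expectedA, 1, playerA);
playerB = elo.updateRating(expectedB, 0, playerB);
```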

tannerwelsh commented 8 years ago

If I were to prioritize the nice to have section, this is how I'd do it:

  1. Take into account how often someone was "shafted" and didn't get their first choice, and prioritize them getting their first choice
  2. Take into account team novelty and (all other things being equal) put people in new configurations
  3. Optimize for the largest difference between the advanced player and the average Elo of the rest of the team

Also, changed "delta" -> "difference" in the last one.

jeffreywescott commented 8 years ago

Notes from 1st Design Review Meeting

How it works

  1. Players' Elo and contribution are calculated
  2. Pool formation
    • Moderator hits cycle init (with thresholds)
      • /cycle init 1050
    • Pool formation algorithm: Magic happens
    • Pools get formed
  3. Team formation
    • People vote within their pools
    • Team formation algorithm: Magic happens
      • Biggest changes:
      • Not every advanced player can work on multiple teams
      • Advanced players' votes are taken into account
    • Teams get formed:
      • /cycle launch

Game Design open questions / issues

UX open questions / issues

Potential implementation risks

Example

Pool 1

| Player | Elo |
| --- | --- |
| John Roberts | 1297 |
| Jared Grippe | 1275 |
| Majid Rahimi | 1068 |
| EthanJStark | 1058 |
| Aileen Santos | 1058 |
| John Hopkins | 1057 |
| Shaka Lee | 1056 |
| James D Stewart | 1055 |
| Nico | 1051 |

Pool 2

(moved from pool 1)

| Player | Elo |
| --- | --- |
| Mihai Banulescu | 1215 |
| Devon Wesley | 1072 |

(original pool 2)

| Player | Elo |
| --- | --- |
| Rachel | 1045 |
| Phillip Lorenzo | 1041 |
| anasauce | 1018 |
| Ej | 1013 |
| Harman Singh | 990 |
| Yaseen Hussain | 960 |
| Syd Rothman | 950 |
| Moniarchy | 935 |
| Thomas W. Smith | 923 |

shereefb commented 8 years ago

@tannerwelsh I prioritized the list of optimizations, but in roughly the reverse order of the one you suggested. Let's have the discussion; I'm curious to hear the thinking behind your ordering.

shereefb commented 8 years ago

@LearnersGuild/los this is ready for another review. Would love to hear your thoughts, objections, open questions, better ideas, etc.

tannerwelsh commented 8 years ago

thanks for the updates @shereefb - I'll review them in a bit.

quick thoughts about the prioritization in https://github.com/LearnersGuild/game-prototype/issues/13#issuecomment-239182038 (I don't need a discussion, my opinions are simple and not that strongly held):

bundacia commented 8 years ago

@shereefb I love the way you're using a config file to punt on some of the hard AI and give the moderators a way to experiment with fine-grained control. A couple of questions:

  1. How would you feel about using YAML to allow more flexibility in adding config options that aren't always threshold-specific (like how to weight different optimization priorities in team formation, etc).
  2. Instead of uploading a locally stored file via a slash command (which would be tricky to do in echo), how would you feel about passing a URL to the file as an arg to the command? You could just drop the config file in a gist, for instance.
  3. What does Number of Concurrent Projects indicate? Is it the max number of concurrent projects an advanced player can be on, or the max number of projects that can be formed in that pool?
  4. I'm not sure I understand why the two files specify different ELO ranges. Can you explain the difference? I assume the ranges in the pool threshold config file specify the ranges for pools, but what are the ranges in the team threshold file for? Could this all just be in one file like this?
| Pool Elo min | Pool Elo max | Advanced Players | Number of Concurrent Projects | Max Team Size |
| --- | --- | --- | --- | --- |
| 0 | 1000 | @bluemihai, @jaredtron | 1 | 4 |
| 1001 | 1100 | @shereefb, @tanner | 3 | 4 |
| 1101 | 1200 | @shereefb, @tanner | 4 | 5 |
| 1201 | 9999 | @carla, @needdra | 5 | 6 |

bundacia commented 8 years ago

@shereefb, more questions:

(1) One of the optimizations listed is:

  1. Minimize teams of size 2

Does that apply even if the recommended team size for the goal is 2? If not, how is it different than

  1. Maximize number of team sizes that are the same as the goal’s recommended team size

(2) Just want to confirm that this list is in priority order. For instance, assume we are able to incorporate the first 5 items in the list into the algorithm. Does the order in this list indicate the priority the algorithm should give to these optimizations at runtime, sacrificing optimizations lower on the list for those higher up?

shereefb commented 8 years ago

@bundacia answers below

How would you feel about using YAML to allow more flexibility in adding config options that aren't always threshold-specific (like how to weight different optimization priorities in team formation, etc).

I feel great about it. Implementation is totally up to UX and engineering. I bow to your wisdom.

Instead of uploading a locally stored file via a slash command (which would be tricky to do in echo), how would you feel about passing a URL to the file as an arg to the command? You could just drop the config file in a gist, for instance.

Yep. Works great for me. Again, totally up to UX and engineering to decide. As game mechanics, I was just trying to communicate the information the moderator needs to convey. How it happens is totally up to other people.

What does Number of Concurrent Projects indicate? Is it the max number of concurrent projects an advanced player can be on, or the max number of projects that can be formed in that pool?

Max number of concurrent projects an advanced player can be on.

I'm not sure I understand why the two files specify different ELO ranges. Can you explain the difference? I assume the ranges in the pool threshold config file specify the ranges for pools, but what are the ranges in the team threshold file for? Could this all just be in one file like this?

The pool threshold file specifies ranges that "slice" the chapter into pools.

The team threshold file specifies thresholds that indicate how many teams a player can be on and how big a team they can handle. As moderator, I can use this to specify that people under 1100 can't lead multiple teams and can't lead teams larger than 4.

Each file is used at a different stage and serves a different function so we shouldn't combine them.

shereefb commented 8 years ago

@bundacia more answers:

Does that apply even if the recommended team size for the goal is 2? If not, how is it different than

There is no recommended team size of 2; three is the minimum. However, the algorithm can go plus or minus one on a team size of 3.

(2) Just want to confirm that this list is in priority order. For instance, assume we are able to incorporate the first 5 items in the list into the algorithm. Does the order in this list indicate the priority the algorithm should give to these optimizations at runtime, sacrificing optimizations lower on the list for those higher up?

Sacrificing lower priorities on the list for those higher up.

tannerwelsh commented 8 years ago

Had some similar questions to @bundacia. Beyond that, I think this is close to an MVP. Comment:

They can only see goals and votes by other players in their pools.

What does it mean to "see goals" by other players in their pools? If this means "only goals authored by other players in their pools", it seems like an unnecessary block. Please consider removing.

IMHO, the players only need to see the votes of other players in their pool, and to not see the votes of players in other pools. The goal library is the same for all pools and all players.

tannerwelsh commented 8 years ago

Each file is used at a different stage and serves a different function so we shouldn't combine them.

Not sure I agree with this. First of all, we shouldn't be thinking of them as different files, but just as settings/parameters for different algorithms. From this standpoint, we should consider whether they are independent parameters, or whether they would be better served by a helpful abstraction that wraps both.

If they're independent, then we're saying that there are two ways to group players and allow/restrict behavior (leading multiple teams, voting, etc.).

A unified approach could take the form of something like the notion of tiers in a league sport.

A tier would represent a band of Elo ratings which confer different behaviors to voting and to team formation. And players from higher tiers can be assigned as advanced players to lower tiers (could even call them team captains for the sake of continuing this metaphor).

So, let's say the moderator can configure the tiers like this:

| Tier | Elo Range | Max. Concurrent Projects | Max. Project Size |
| --- | --- | --- | --- |
| 1 | 0..1000 | 1 | 4 |
| 2 | 1001..1200 | 2 | 5 |
| 3 | 1201..1400 | 3 | 6 |
| 4 | 1401..1600 | 4 | 8 |

And then the moderator can assign players from higher tiers to be advanced players (team captains) on lower tiers, where they would vote with and abide by the project concurrency restrictions of their play tier.

With this, we have the abstraction of saying "player X is playing in tier N" or "player Y is in tier O but is playing as an advanced player in tier P".

That way, with the knowledge of a player's Elo rating, you know which tier they play in and thus which settings apply. If they are assigned as an advanced player, you would only have to know which tier they are assigned to in order to derive which settings apply to them.
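
A sketch of how that lookup could work under this proposal (hypothetical names; the optional override models an advanced player assigned to play in a lower tier):

```ts
// Hypothetical encoding of the tier table above.
interface Tier {
  tier: number;
  eloMin: number;
  eloMax: number;
  maxConcurrentProjects: number;
  maxProjectSize: number;
}

const tiers: Tier[] = [
  { tier: 1, eloMin: 0,    eloMax: 1000, maxConcurrentProjects: 1, maxProjectSize: 4 },
  { tier: 2, eloMin: 1001, eloMax: 1200, maxConcurrentProjects: 2, maxProjectSize: 5 },
  { tier: 3, eloMin: 1201, eloMax: 1400, maxConcurrentProjects: 3, maxProjectSize: 6 },
  { tier: 4, eloMin: 1401, eloMax: 1600, maxConcurrentProjects: 4, maxProjectSize: 8 },
];

// Settings come from the tier a player is *playing* in: their Elo tier by
// default, or the tier they were assigned to as an advanced player.
function activeTier(elo: number, assignedTier?: number): Tier {
  const t = assignedTier !== undefined
    ? tiers.find(x => x.tier === assignedTier)
    : tiers.find(x => elo >= x.eloMin && elo <= x.eloMax);
  if (!t) throw new Error(`no tier for Elo ${elo} / assignment ${assignedTier}`);
  return t;
}
```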

All settings of the tiers can be changed to accommodate different groupings, and advanced players can be chosen independently of the tier settings.

If there are different configurations for each algorithm, then to derive all of the above you'd need to consult both sets of configurations for each player's Elo. That makes the results harder to reason about: advanced players in the same voting pool may have Elo ratings that straddle a team threshold, so team formation will treat them differently.


Endnote: I see the value of clearly identifying these two steps as separate processes, but I think it's overly complex to associate behaviors with one set of Elo rating bands and then associate different behaviors with a different set of bands. I predict this will be prone to logical errors in the experimentation phase and will be very difficult to communicate clearly to the learners.

bundacia commented 8 years ago

I agree that putting everything in one file makes sense. But that's probably a UX discussion anyway; it doesn't change the mechanics of the algorithm, so we can probably punt on that decision for now.

What is more of a mechanics thing is whether we want two separate sets of bands for pools and teams. I agree with @tannerwelsh that a single set of bands is simpler and less error-prone (if less flexible). I'd lobby to go with a single set to start with; we can always switch to the more complex option if we feel we really need it.

shereefb commented 8 years ago

Whether it's one file or two is not my call as game mechanics. OND. One file with two distinct sections.

What does matter to me as moderator is having the two Elo ranges be different. The Elo ranges used to slice a chapter into pools will almost definitely be different from the ones that define advanced player limits.

shereefb commented 8 years ago

@tannerwelsh Just read through your tier comments/proposal. I love it, and I think that's where we might end up in V2 or V3 of this. But right now, I have no evidence that each tier would be able to provide advanced players to the next tier. It might end up being (for example) that only paid players and a handful of existing players can act as advanced players.

Forcing both algorithms to accept the same bands as parameters will be too restrictive for me initially as moderator.

First of all, we shouldn't be thinking of them as different files, but just as settings/parameters for different algorithms

+1. Totally agree.

If they're independent, then we're saying that there are two ways to group players and allow/restrict behavior (leading multiple teams, voting, etc.).

Not sure I follow. One way groups players, the other way restricts advanced players' parameters. You could say there are two ways to "group players and allow/restrict behavior" or you could say there is one way to group players and another to allow/restrict behaviors.

These are separate, independent parameters for two separate, independent algorithms. As moderator, I will calibrate the team thresholds file based on how Elo ratings translate to a player's capacity to lead multiple teams of a certain size.

I will calibrate the pool thresholds based on my sense of where the chapter is "clumping", the availability of paid players and moderators, and the size of conversations.

For example, I might experiment with 3 pools one cycle, then 4, then 2. Meanwhile, the team thresholds file stays locked and doesn't need to change.

shereefb commented 8 years ago

@LearnersGuild/los I'm hoping to move this to UX by end of this week.

Anything unclear about this design? Anybody want to raise objections, have ideas for making it simpler, better, sexier?

@tannerwelsh , @bundacia are your points addressed? Want to schedule time with me to discuss? @prattsj what do you think?

bundacia commented 8 years ago

@shereefb: my questions have been answered. I agree that the decision on having 2 sets of ranges vs. 1 is definitely game mechanics' to make.

I'd love to start work on this before the end of the week. Are there particular things about the spec that you are still hashing out? For example, if we know for sure we want voting pools there is work I can start doing now that won't be impacted by other smaller details.

tannerwelsh commented 8 years ago

I feel heard, and will abstain. You're speaking mostly from your role as moderator, which I can't understand as closely, not having had the same experiences. Fine to move forward by me.

One clarification:

But right now, I have no evidence that each tier would be able to provide advanced players to the next tier.

I'm not saying that all advanced players have to come from the tier directly above. I'm merely saying that the rules governing team creation restrictions (and by proxy, advanced player restrictions)—size, and number of teams that an advanced player can play on—should reside with the same Elo group as the voting pool.

Advanced players can be hand-picked from any tier, as far as I'm concerned, but once they become an advanced player, they abide by the team-formation restrictions of their "active" tier (i.e. the one they're playing in, not the one corresponding to their Elo rating).

tannerwelsh commented 8 years ago

After convo w/ @shereefb, updated specs to remove the "max team size" component from team formation threshold.

The only functional difference between advanced players and regular players is that some advanced players can be on more than one team. The number of teams they can join is determined by their Elo rating, which the moderator can adjust.

heyheyjp commented 8 years ago

@shereefb, @tannerwelsh: alrighty. Fresh round of reactions and questions on deck. I'll just post one at a time as I sort them all out - beginning with a couple of "big picture" tensions, then moving on to some clarifying questions about the design so far and one or two small nitpicks.

Bigger Picture Tension 1

First up, I have this nagging feeling about the significant increase in focus/strength on the positioning of players as competitors. I want to question the implied assumption that this will be in the learners' best collective and individual interests.

Thinking of the projects as games in which players compete against each other to contribute the most value per hour assumes that software product development is a zero sum game (it isn’t, of course) and seems to encourage a kind of adversarial perspective that might work directly against our objective of helping them become master collaborators. If two advanced players and one super junior player work together and all contribute significantly relative to what would be expected of them given their level of expertise, shouldn't they all see a meaningful gain in rating? Why should someone have to lose in order for someone else to win?

At some point, we're going to be called on to explain how the new ratings work and why they are designed to work the way they do. When we tell them that the ratings essentially are determined by pitting each player against each of their teammates, and that in being determined to have "won" - having contributed at a higher rate than another person on their team - something valuable is actually taken away from this person they're supposed to be supporting and with whom they're expected to find a way to work most productively....when we explain this....I dunno...feels like a potentially serious issue on the horizon. What am I missing? I still don't feel like I have a solid handle on the Elo rating system and how it's meant to be used here, so I'm sure there's something I'm not seeing about its suitability. Gonna go do a bit more review and read the linked resources. :)

Sidenote: I was really hesitant to voice this tension because so much time has been invested here. There's all of this amazing work that's been done. I want it to be the right direction because:

It seems to be able to solve some serious problems with the current/old approach. It's f***ing awesome.

Just wondering if we're introducing a new and serious problem along with the solution..

(gif: whack-a-mole kitty)

heyheyjp commented 8 years ago

Bigger Picture Tension 2 (that probably doesn't really belong here but is relevant enough that I'm hoping to get away with addressing it in part here anyway)

We have at least three classes of end user to consider: player, moderator, and game mechanics designer. All three of these end user types need to be able to consume information from the system. The player and moderator users, of course, need to also be able to enter information directly into the developed system.

Right now, the experiences of the game mechanics designer and the moderator are getting the lion’s share of the attention. Player UX too often comes as an afterthought. I worry about this.

It seems risky to assume that we can safely sign off on the design for project formation mechanics without having solidified the basics of the end-to-end experience for every end user type. It's risky for the development timeline, in that something might surface that forces us to undo work done on the assumption that game mechanics requirements would not change. And it's risky for one or more end user types, in that their experience will almost certainly get shafted in some way in the interest of not undoing work already done or committed to.

If we decide not to do it this time, I hope we can at least get to a point where we're considering the experiences of every end user affected, treating all of their experiences as first-class priorities, even if we still ultimately prioritize the quality of the UX for some users over that for others. At least then it would be intentional and explicit. :)

shereefb commented 8 years ago

Thanks for the thoughtful responses @prattsj

I think the discussion re: UX comments is best left for @jeffreywescott and @tannerwelsh to address. I'll only say that I agree with you that we should think through the end-to-end player experience before implementing anything.

Right now, the experiences of the game mechanics designer and the moderator are getting the lion’s share of the attention. Player UX too often comes as an afterthought. I worry about this.

The way I see it, that's by design in how our process works. This is just the review of the game mechanics phase; next, this game mechanic goes to UX to address exactly that. I don't think it's an afterthought; it just comes next in line.

shereefb commented 8 years ago

re: Bigger Picture Tension 1:

Thinking of the projects as games in which players compete against each other to contribute the most value per hour assumes that software product development is a zero sum game (it isn’t, of course) and seems to encourage a kind of adversarial perspective that might work directly against our objective of helping them become master collaborators.

I agree. Which is why they (and we) shouldn't think of projects as competitive games. They're not. They're simulations of real-world projects.

Elo and XP compare players to each other; they don't pit them against each other. The relative contribution sliders during the retro already make that comparison, and it hasn't been framed as a competition by learners as far as I can tell.

If two advanced players and one super junior player work together and all contribute significantly relative to what would be expected of them given their level of expertise, shouldn't they all see a meaningful gain in rating? Why should someone have to lose in order for someone else to win?

Elo ratings are by definition a comparative stat. They compare one player's capacity to contribute to a collective project against another's. In the scenario above, if all players contribute in the way their Elo ratings predict, then nobody's rating changes. Everyone's XP, on the other hand, goes up.

At some point, we're going to be called on to explain how the new ratings work and why they are designed to work the way they do.

As Moderator I don't intend to wait to be called on. @tannerwelsh and I have been publishing every stat and how it works to the playbook before learners see the numbers. The more we are transparent about how these stats work, the more meaningful they will be to the learners.

When we tell them that the ratings essentially are determined by pitting each player against each of their teammates, and that in being determined to have "won" - having contributed at a higher rate than another person on their team - something valuable is actually taken away from this person they're supposed to be supporting and with whom they're expected to find a way to work most productively....when we explain this....I dunno...feels like a potentially serious issue on the horizon.

I agree that there's a risk here.

Here's how I would frame it:

Our best definition of what a web developer is: Someone who has the skill to contribute (on a team) to a web project and is a net positive to their team's culture.

Beginner web developers start off with shitty contribution and team skills. They get better over time. When they reach a certain threshold, other web developers want to pay them to build software with them.

We need stats that track both of these dimensions.

Support stats are meant to indicate a player's positive contribution to team culture. XP is meant to be a cumulative stat that shows how much "group building" a player has done during their time at LG. Elo is meant to be a relative rating that shows how skilled a player is at contributing compared to other players at LG.

All our stats are relative and subjective by design. They depend 100% on who is giving the feedback.

In order to measure a player's positive culture contribution, we consistently ask the people who work with them whether or not (and to what degree) they contributed.

In order to measure a player's capacity to contribute to a team project, we consistently compare their contribution to that of professional web developers whom we know can get paid to write software.

This comparison is reflected in the ELO rating. We are not pitting players against each other, we are comparing their contribution to reflect their relative skills (and progress) back to them.

In the scenario you talked about, where a junior player plays with two advanced players, they should be thrilled. Not only is it a chance to learn; their ranking also gets adjusted to more accurately reflect where they stand in relation to these players. They get a more solid idea of how close they are to their goal of getting paid to write software.

For sake of argument, take a simplistic, 1-dimensional view of web development as a skill, and assume there is a unit that measures how much Jared can contribute to a software project in a fixed amount of time. Jared has 100 units.

Now take a simplistic view of the market for developers, and assume we can determine that employers are willing to hire software developers if they are half as productive as Jared.

Wouldn't you (as an aspiring software developer) want to track your productivity/skill/capacity against Jared's? Wouldn't you want to watch that number grow with time?

To track against is the same thing as to compare.

I would also argue that the best technical interviews are essentially what we're doing with each project: a session where you build software as a team and compare the interviewee's capacity to develop software with yours (or that of other people on the team). Based on the relative skill level, you determine whether or not to hire that person (and how much to pay them).

something valuable is actually taken away from this person they're supposed to be supporting and with whom they're expected to find a way to work most productively....when we explain this....I dunno...feels like a potentially serious issue on the horizon. What am I missing? I still don't feel like I have a solid handle on the Elo rating system and how it's meant to be used here, so I'm sure there's something I'm not seeing about its suitability. Gonna go do a bit more review and read the linked resources. :)

I think you're 100% correct in that things can be perceived in this way. But it would be a grave misunderstanding. To demonstrate, think of an extreme scenario where we figure out an amazing accelerated pedagogy: Everyone in the guild learns at the same pace, and achieves the exact set of skills that Jrob has by the end of month 6.

If everyone (10 people) starts at Elo 1000 and Jrob starts at Elo 1400, then by the end of month 6 everyone will have a rating of roughly 1036 (including Jrob): Elo is zero-sum, so the 10 × 1000 + 1400 = 11,400 total points only get redistributed, and 11,400 / 11 ≈ 1036. The rating in and of itself doesn't mean anything. In fact, its meaning changes with time as Jrob's rating continues to drop from 1400 all the way to 1036. What's meaningful is "where am I, compared to Jrob?" "Am I as capable of contributing to a project as he is?"

As I play different games during these 6 months, I can see myself getting closer to that goal (or further). With every game I play, I get an accurate reflection of where I am. Nothing is taken from me. It's just a more accurate GPS location.
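
For what it's worth, the "nothing is taken from anyone overall" point follows from the standard Elo update with a shared K-factor: the two rating deltas always cancel. A quick sketch using the textbook formulas (not project code):

```ts
// Textbook Elo expected score and update with a shared K-factor.
const K = 32;
const expected = (a: number, b: number) => 1 / (1 + 10 ** ((b - a) / 400));

function play(ratingA: number, ratingB: number, scoreA: 0 | 0.5 | 1) {
  const deltaA = K * (scoreA - expected(ratingA, ratingB));
  const deltaB = K * ((1 - scoreA) - expected(ratingB, ratingA));
  // expected(a, b) + expected(b, a) === 1 and the actual scores sum to 1,
  // so deltaA + deltaB === 0: total rating is conserved across any game.
  return [ratingA + deltaA, ratingB + deltaB];
}

const [a, b] = play(1000, 1400, 1);
console.log(a + b); // 2400 -- same total as before the game
```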

shereefb commented 8 years ago

Sidenote: I was really hesitant to voice this tension because so much time has been invested here. There's all of this amazing work that's been done. I want it to be the right direction because:

It seems to be able to solve some serious problems with the current/old approach. It's f***ing awesome.

I'm SO glad you did. I have yet to experience anything but gratitude to you for sharing your seeing with me.

That's exactly the kind of feedback/pushback/challenge that makes our design better. Please keep holding your tensions tightly and bringing them to the surface. My role's purpose is better served when you do that!

bundacia commented 8 years ago

Let's make sure we address these concerns about how Elo might be perceived when it comes to UX. For instance, we should give lots of context around the Elo score and never just display it in isolation. Some interesting Elo-derived stats that might provide more context could be:

Showing these types of stats (and how they're changing over time) would help frame Elo and take the focus off whether my Elo went up or down, which is a meaningless fact on its own.

I don't want to derail this thread into a UX discussion, so I'm going to put these suggestions in an Asana task for the UX role to consider.

tannerwelsh commented 8 years ago

I like all the discussion here, and haven't seen any blocks that would keep it from moving forward. @shereefb, as the person assigned to this issue - have you received enough feedback? Ready to move to the next board?

(FYI, I asked @jeffreywescott as governor to clarify the accountability for "moving issues from one board to the next": https://app.asana.com/0/68600949079872/169000672162213)

bundacia commented 8 years ago

@shereefb one more clarifying question:

If the moderator assigns more advanced players to a pool than are needed to build a set of optimal teams, how should the algorithm behave? For instance, imagine this scenario:

There are 10 players in a pool. 3 of those players are advanced players chosen by the moderator. Everyone votes for the same goal, which has a team size of 5. The best team assignment is 2 teams of 5, but that means one team will have 2 advanced players. Is that OK? If not, we need to add "Teams have exactly one advanced player" to the Constraints section. If it is OK, then we may want to add it to the Prioritized Optimizations section in the appropriate location, so that we at least shoot for it but know when to give up in favor of more important things.

bundacia commented 8 years ago

Along those same lines, we should add this to the Prioritized Optimizations as well:

Minimize the number of teams a player is on

Since I figure that, all else being equal, it makes sense to have people on fewer teams.

shereefb commented 8 years ago

Great catch, @bundacia. Added "Minimize the number of teams a player is on" as an optimizing function.

shereefb commented 8 years ago

@jeffreywescott this is ready for UX and implementation. @bundacia had mentioned that there are some pieces that aren't UX dependent and can be carved out straight into backlog.

tannerwelsh commented 8 years ago

Closing since this has been pushed to implementation: https://github.com/LearnersGuild/game/issues/418

jeffreywescott commented 8 years ago

Reopening because it is dependent upon #9.

shereefb commented 8 years ago

@jeffreywescott this is RFI now. We extracted #55 from #9 so it doesn't block it.

We'll keep playing with the right K-factor for the next two weeks, but it shouldn't block the implementation.

RFI /cc @LearnersGuild/software

jeffreywescott commented 8 years ago

Okay.