couch2code / couch2code.com

Couch to Code - A social application that inspires developers to keep practicing and pushing themselves

Challenges and Scoring #2

Open twilson63 opened 10 years ago

twilson63 commented 10 years ago

@jsouthard I have an idea regarding the challenges and scoring.

When entering a challenge, you initialize it with a difficulty rating using the same pattern as in golf:

par 3
par 4
par 5

etc.

Then, when each player participates and is reviewed, the review gives them a score relative to par.

This makes it super easy to compute an average grade for a player:

@twilson63 is currently 2 under par and has played 30 challenges with an average par of 5

@jsouthard is currently 4 under par and has played 20 challenges with an average par of 4.74

Then the leaderboard can reflect a weighted combination of the number of challenges played and the current par score.

Maybe you have to get reviewed at least 3 times to post your score?
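Just to make the idea concrete, here's a rough sketch of how that scoring might be computed (all field names here are made up for illustration, and the 3-review minimum is the threshold floated above):

```python
def player_stats(rounds, min_reviews_per_round=3):
    """Aggregate a player's rounds into golf-style stats.

    rounds: list of dicts like {"par": 4, "score": 3, "reviews": 3};
    rounds with fewer than min_reviews_per_round reviews don't count.
    """
    counted = [r for r in rounds if r["reviews"] >= min_reviews_per_round]
    if not counted:
        return None  # not enough reviewed rounds to post a score
    relative = sum(r["score"] - r["par"] for r in counted)
    avg_par = sum(r["par"] for r in counted) / len(counted)
    return {
        "relative_to_par": relative,        # negative means under par
        "challenges_played": len(counted),
        "average_par": round(avg_par, 2),
    }
```

So a player who shot 4 on a par 5 and 3 on a par 4 (both with 3+ reviews) would show as 2 under par across 2 challenges with an average par of 4.5.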

Thoughts?

jsouthard commented 10 years ago

I like the idea of keeping the scoring simplified. I remember when we talked you floated the idea of trying to keep the scoring along the lines of simple up-down type of voting.

I think the difficulty with par is that both the initial rating and the reviewer rating will still largely be subjective.

Running with the idea of up/down voting, can I share what I was thinking about this morning? (I'll make another issue post with the details)

twilson63 commented 10 years ago

Yeah, it would be at first, but over time the averages would work it out.

Par is the average score for a given challenge. I'm looking forward to seeing how the up/down rating could prove more accurate.

Just as in golf you can play a hole or course many times and either improve or regress, I would expect developers to want to do the same.

Take Conway's Game of Life, for example: the fact that I completed it should not be enough. I should get some quality assessment of the implementation that leads to a score, then be encouraged to play the challenge again, improve, and continue to iterate. The data could lead to interesting outcomes and analysis. Also, the challenge does not have to be ranked by its creator; the ranking could again be based on averages, creating a scale of excellence:

1 - Perfect
2 - Very Above Average
3 - Above Average
4 - Average
5 - Slightly Below Average
6 - Areas for Improvement

Then, when the reviewers rate a challenger's code, it is a little less subjective, and you need three reviews to have your challenge submission registered.

Anyway just some thoughts.

Looking forward to your thoughts

Tom


jsouthard commented 10 years ago

I think we're on a similar page in wanting challenges to have more depth than binary outcomes (completed vs not) and to provide an opportunity for continued improvement of the answer.

The proposal I was considering builds off of the idea of the up/down voting you mentioned before, but adds extensions/bonuses/subchallenges which can each individually get an up/down vote.

Summation Challenge

Challenge: Write a program which takes a positive integer, computes the sum of all positive integers between it and zero, and prints the result.

Alternative approach extension: Demonstrate the ability to compute the sum both iteratively and recursively.
Flexibility extension: The program produces correct results for negative number inputs too.
Reliability extension: The program rejects inputs which are not integers.
Scalability extension: The number of computations for the program scales better than O(n).
Usability extension: The program notifies the user when an input value exceeds what can be correctly computed.
Testing extension: The submission includes unit tests validating each claimed extension.
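For a sense of what a submission hitting several of these extensions might look like, here's an illustrative sketch (not a reference answer):

```python
def summation(n):
    """Sum of all integers between n and zero.

    Uses the closed-form formula, so it runs in O(1) rather than O(n)
    (Scalability extension) and handles negative inputs by symmetry
    (Flexibility extension).
    """
    if not isinstance(n, int) or isinstance(n, bool):
        raise TypeError("input must be an integer")  # Reliability extension
    sign = -1 if n < 0 else 1
    m = abs(n)
    return sign * m * (m + 1) // 2

def summation_recursive(n):
    """Recursive variant (Alternative approach extension)."""
    if n == 0:
        return 0
    step = -1 if n > 0 else 1
    return n + summation_recursive(n + step)
```

Even a tiny problem like this leaves real room for a reviewer to judge whether each claimed extension actually holds.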

I think there are multiple things that can improve a challenge submission and this is an attempt to give some concreteness to "better" while also opening the door to a lot of potential depth in the challenges.

To build on the par idea, which I do like BTW, you could break out different stats on the user groups based on the average completion. For example, counting the initial challenge and each completed extension as 1 (although you could implement a different weighting system):

Summation Challenge:

As the size of the challenge database grows, you could treat the par rating as a difficulty rating (e.g., challenges where students average a par of 3 or greater will have easier objectives than challenges where students par at less than 1).
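The average-completion idea above could be computed as in this sketch, where each submission contributes the count of goals it completed (the initial challenge plus each completed extension counting as 1; thresholds taken from the example above):

```python
def challenge_par(goal_counts):
    """Par for a challenge: the average number of goals completed
    per submission (initial challenge + each extension = 1 each)."""
    return sum(goal_counts) / len(goal_counts)

def difficulty(par):
    """Rough difficulty bucket derived from par."""
    if par >= 3:
        return "easier"
    if par < 1:
        return "harder"
    return "moderate"
```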

As you see above, I tried to keep the extensions thematic. If the extensions have themes, then when collecting and presenting stats for individual users, you could show that a user, for instance, completed 90% of the Usability sub-challenges but only 20% of the Scalability ones, giving some direction on the kinds of things they have demonstrated well and areas where they can stretch a little.

I think there's some substantial overlap between what you've proposed and what I was thinking - what are your thoughts on this approach?

jsouthard commented 10 years ago

Any thoughts on my suggestion? We can focus on only having one item to a challenge initially, of course - but I'd like to factor in the flexibility to implement this approach if we're not willing to rule it out. (No extensions being a specialized case of challenge + extensions)

twilson63 commented 10 years ago

Hey James,

So far I think I like it, but I may need some time to fully digest it. Maybe we should put a few challenges together to get an idea of how the data object graph might look for a challenge.

Here are my thoughts

challenge
  └── submission (1..n)
        └── review (1..n)
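One way the object graph could be laid out as linked documents, just to have something concrete to react to (all ids and field names here are hypothetical placeholders):

```python
# Sketch of the challenge -> submission -> review graph as documents
# linked by ids (illustrative only; nothing here is the real schema).
challenge = {"_id": "challenge:summation", "type": "challenge",
             "title": "Summation Challenge"}

submissions = [
    {"_id": "submission:1", "type": "submission",
     "challenge_id": "challenge:summation", "author": "twilson63"},
]

reviews = [
    {"_id": "review:1", "type": "review",
     "submission_id": "submission:1", "reviewer": "jsouthard", "score": 3},
]

def reviews_for(submission_id, reviews):
    """Walk the 1..n edge from a submission to its reviews."""
    return [r for r in reviews if r["submission_id"] == submission_id]
```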

So here are my initial ideas for challenges, which could be anything, BTW:

etc.

Are you cool with the challenge -> submission -> review?

What I recommend is that we get the basic structure reading, writing, and working, then meet up and come up with a first pass at the scoring formula, get some people to use it, gather feedback, and iterate.

What do you think? I am not married to the particulars; I just want the scheme to provide true value and not be a popularity contest.

Thanks

Tom


jsouthard commented 10 years ago

Agree with the structure of challenges having (0..n) submissions which have (0..n) reviews.

The idea I'm proposing would add something like "goals" under challenge where each challenge has at least 1 functional goal. Said differently, each challenge has one or more required goals and zero to n optional goals.
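A minimal sketch of what "goals under challenge" might look like in the data (field names are illustrative, not a proposed schema):

```python
# A challenge carries one or more required goals and 0..n optional ones.
challenge = {
    "title": "Summation Challenge",
    "goals": [
        {"name": "Compute and print the sum", "required": True},
        {"name": "Iterative and recursive variants", "required": False},
        {"name": "Correct for negative inputs", "required": False},
    ],
}

def required_goals(challenge):
    return [g["name"] for g in challenge["goals"] if g["required"]]

def optional_goals(challenge):
    return [g["name"] for g in challenge["goals"] if not g["required"]]
```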

As far as challenges, I think clearly described challenges makes the challenges accessible to a broad skill range. I had envisioned challenges starting along the lines of interview questions / programming homework assignments. The idea of additional optional goals increases the depth or breadth of the problem for further skill enhancement, but leaves the "minimum to play" low enough.

From the examples you listed, I think Conway's game of life is very much along the lines of what I was thinking as it has a fairly simple ruleset, implementation criteria, and initial conditions for "done". Even with that simplicity, it has room for extending the goal set to be a rich challenge.

The Todo application you mention makes me even more inclined to have some structure of laying out individual goals for the challenge to make the required functionality clear and proactively identify opportunities for improving.

I'll try to riff on those two in light of a "multi-goal" implementation and add a few more examples of my own.

Back on the topic of the data: I think there's additional value in having an up/down tally on submission reviews to rate their "helpfulness". I'd prefer a mechanism that buries "you suck, do better" comments under "here's a specific improvement for your submission".
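The burying mechanism could be as simple as ordering reviews by net helpfulness, something like (sketch only):

```python
def rank_reviews(reviews):
    """Sort reviews by net helpfulness (ups minus downs), most helpful
    first, so unhelpful comments sink to the bottom."""
    return sorted(reviews, key=lambda r: r["ups"] - r["downs"], reverse=True)
```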

twilson63 commented 10 years ago

Sounds good; goals sound good. Can you work on a UX mockup of the challenge form as you envision it? It does not have to be fancy; you could just hard-code the main form.

Thx

Tom



jsouthard commented 10 years ago

Absolutely, I was already planning on doing some mockups to share what I'm visualizing.
