topcoderinc / cab


extra feature policy #74

Closed lstkz closed 6 years ago

lstkz commented 7 years ago

Old thread https://apps.topcoder.com/forums/?module=Thread&threadID=883008&start=0

I can't find the correct quote, but it was rejected by the previous CAB because "we will have a new scorecard and this section will be removed". As we can see it was 1 year ago, and the code scorecard is used in every challenge.

The problem:

a) Small enhancements are never counted as extra features because they are too trivial.
b) Some members try to implement big features.
c) As a submitter, I am forced to describe a full list of all possible extra features in my readme, and the reviewer must pick one of them.
d) Even if I implement something extra and forget to mention it as in c), I can't appeal the result again; the copilots always reject it because I didn't write it in my appeal.
e) This is highly subjective. It happens very often that reviewers reject my extra features but give me points for something that should be a base requirement.

My suggested solution.

hokienick commented 7 years ago

Talked with Mess and the Dev team; the suggestion is to remove this from the scorecard. If anything, keep two scorecards: one that has it and one that doesn't. Bring in James Cori, who is already involved with the scorecard and has thoughts on it.

f0rc0d3r commented 7 years ago

I've just seen the old thread and I'm amazed by the examples given in the last post.

At least I'm going to highlight this sentence from the thread, as I think it is the key point:

You should provide a new big rule saying that problematic or unneeded extra features will cause a loss of points.

I see that it's easy to google or recycle code solutions and paste them into a project to win extra points, and harder to stick to the specific requirements. That's why the prize should go to the one who codes exactly what is asked for.

Also: extra features should be listed by the developer as possible enhancements to the code. If the client wants these extra features, they should pay for them (even if it's a very small payment). This could be treated as a new optional rule: it's up to the developer to suggest things that can easily be added to make the code better. That would mean greater involvement in the project and a deeper understanding of it. This way you avoid the scorecard question and the subjective treatment of these features altogether.

f0rc0d3r commented 7 years ago

Also: the code should be maintained and documented (developers tend to forget documentation)...

I don't know how Topcoder delivers final projects, but before a code contest there have been architecture phases, a definition of requirements, and documentation. Adding new things means you are not honoring those previous requirements, which tends to be bad for projects. Also, everything should be documented and maintained.

hokienick commented 7 years ago

@wdprice and I will discuss this for the first time on 07/06.

Final resting place in Help Center.

wdprice commented 6 years ago

Hey guys - from my point of view, the issue here seems to be primarily around question 1.1.7 in the Code Default 0.0.2 scorecard: "Extent to which the submission adds any additional functionality or features that were not requested but beneficial?"

I agree with the idea that this ends up being very subjective and defeats the purpose of having an objective, scorecard-based review.

To address this, I would like to retire that scorecard and ensure the new default is Code Generic 0.0.1 - which was the product of the work done by James Cori and callmekatootie - and was reviewed and discussed by the community.

At that point I think we can clearly say - "You are judged according to the scorecard and the specification." when it comes to Extra Features.

Thoughts?

ThomasKranitsas commented 6 years ago

Hey @wdprice! As I suggested in one of our (CAB) meetings, section 1.1.7 should either be removed or made optional.

Now for the Code Generic scorecard: I quite like it, as it makes it easier to determine where each review response belongs during the review phase. The only thing I don't like is the Testing section, since unit testing is not required in most challenges.

I would rather treat unit testing (when it is required) as a major requirement.

birdofpreyru commented 6 years ago

@wdprice @ThomasKranitsas Code Generic is a bad scorecard - even without any competition on a challenge, it is way too easy to fail even with quite a good solution :) The new iteration being prepared by @rootelement is way better, I believe.

rootelement commented 6 years ago

I will provide a new scorecard this week. It will look a lot like http://www.topcoder.com/scorecard/scorecard/show/30001971 but will have the testing section removed, like this one: http://www.topcoder.com/scorecard/scorecard/show/30002020

Then the first scorecard linked above will be set as the default for all coding challenges. I was told it had been already, hence my confusion as to why people were bringing up the extra-features section. That scorecard should've been deprecated as per this blog article: https://www.topcoder.com/blog/whos-keeping-score/

lstkz commented 6 years ago

@rootelement

hence my confusion as to why people were bringing up the extra-features section. That scorecard should've been deprecated as per this blog article: https://www.topcoder.com/blog/whos-keeping-score/

The problem is that you don't track challenges and copilots, and you assume that all copilots read your blog posts and use the new scorecard.

Last 500 challenges, starting from 2016-10-20 (when the new code scorecard went live):

  '30001031': 6, // subjective
  '30001610': 416, // old code scorecard
  '30001620': 18, // subjective
  '30001823': 1, // subjective
  '30001881': 3, // subjective
  '30001971': 25, // new code scorecard
  '30001973': 2, // old code scorecard (backend only)
  '30002000': 1, // architecture
  '30002010': 15, // bug bash
  '30002020': 12, // new new code scorecard
  '30002042': 1 // custom nasa scorecard

If we exclude non-code contests: the old code scorecard was used in 92% (416/453) of contests; the new scorecard in 6% (25/453).

Last 50 challenges (1 full month):

  '30001610': 39,  // old code scorecard
  '30001620': 3, // subjective
  '30001881': 1,  // subjective
  '30002010': 2, // bug bash
  '30002020': 4, // new new code scorecard
  '30002042': 1  // custom nasa scorecard

Old code scorecard: 91% (39/43); new code scorecard: 0%; new new code scorecard: 9% (4/43).

This issue was brought up by the community more than 1.5 years ago (there are many threads in the forum). Instead of getting a quick fix, we hear all the time: "don't worry, we are creating a new scorecard, and this issue will be solved." Now you are saying the same thing, and we must wait for another new new new new scorecard that probably won't be liked or used by copilots.

rootelement commented 6 years ago

No, you're misunderstanding. The blog post was just to announce that the scorecard was coming. I was told by the R&D team that the first scorecard linked above had been made the default on any newly created code challenge, and thought it to be so. There's nothing in the UI that denotes a default scorecard, and my copilots created the challenge drafts, so I never noticed. Tony is on vacation this week, but as soon as I get this new revision (probably today) I will make sure it is the default. I'm not expecting people to seek out that scorecard; the names are kinda vague and there's no description (which I have issues open about as well).

You have to remember that we (the architects) do this kind of work on our own time, between finding new work for the community and running work on the community. We have no presales and no real hours allocated to do this stuff. That is currently changing with self-service through Connect, and because Wipro is allowing us to hire people, but it's a slow road - hence the slow turnaround.

wdprice commented 6 years ago

I also did some digging in Direct and your data is correct @lsentkiewicz - the old code scorecard will be deactivated as soon as we can. We're all in agreement it is an issue.

rootelement commented 6 years ago

Are you all able to see this one? http://www.topcoder.com/scorecard/scorecard/show/30002050

I've made it 0.0.3. I'm going to rename the 3 cards: deactivate the old "Extra features" and 0.0.1 versions, rename the 0.0.2 version to "Time-based Results", and rename this one to "Code - Generic". But there's a problem with the rename feature on saved, active scorecards (surprise).

I've cloned the 0.0.1 card, dropped the testing section (as it should be detailed in the spec, not hidden in the scorecard), and adjusted the scores to match the 0.0.2 time-based scorecard as I believe the weights are better and more fair.

Please review. When I get some time with R&D, I will clean up the naming, set the default, change the active states as mentioned above, and post here.

rootelement commented 6 years ago

Also, once those in this ticket review the scorecard, I'll open it up for comment in the forum post as well.

ThomasKranitsas commented 6 years ago

@rootelement the link you posted above redirects me to this: http://take.ms/FGNcU I tried this link https://software.topcoder.com/review/actions/ViewScorecard?scid=30002050 but it says that the scorecard is not active (which I believe is expected).

rootelement commented 6 years ago

Ok, try now. I made it active.

kondakovdmitry commented 6 years ago

Why is no one defending the extra-features section? It's painful to see it dying.

I actually like the extra-features section. It gives submitters an opportunity to make their submission stand out, given that all other challenge requirements are fully met. I think that without this section submissions will be of lower quality: submitters will tend to just formally fulfill all the stated requirements as quickly as they can. But often some extra effort is required to make a really high-quality submission (for example, fixing an annoying bug that was not required). Without extra features in review, submitters will not be motivated to do this at all.

As for useless extra features, I think it's the reviewer's job to judge what is useful and what is not. I personally don't mind that reviewers score this section subjectively (even if it sometimes feels unfair). For that matter, all review is subjective in some way; that's why we have more than one reviewer.

Please don't kill the extra-features section right away; give it another thought.

rootelement commented 6 years ago

The extra-features section will no longer be the default scorecard. I'm also not going to kill the Time-based results scorecard. Both will still exist, but neither will be the default.

Some features I've asked for on the platform are:

  1. Scorecards should have a description and a base spec template attached to them.
  2. The names of scorecards should be made more descriptive, and when one is chosen in Direct, you should see a short and a long description of why and when to use that scorecard, and be able to use its spec template if you want.

More scorecards will be coming out. One I'm currently working on is a scorecard to grade submissions on Cognitive projects, which can be relatively subjective (i.e., if we're asking someone to flesh out a chatbot conversation). I think a scorecard that allows for innovation (such as on this challenge, where it is clearly stated in the spec) is not only interesting but necessary when the client is looking for fresh ideas.

But I think having this ambiguity in the everyday code-challenge scorecard leads to the entire reason we're even having this discussion. If you're building a Node.js API with 5 endpoints, that question looms, almost paralyzing the member into "add something... anything!" to stand out.

kondakovdmitry commented 6 years ago

If you're building a Node.js API with 5 endpoints, that question looms, almost paralyzing the member into "add something... anything!" to stand out.

Yes, I know this feeling. But even though it's sometimes unpleasant, it forces me to pause and think a bit outside the box of the formal requirements, which I think is a good thing to do. If nothing good comes to mind, no problem - I'll submit without any extras. If I do something extra and the reviewer finds it useless, so be it. But at least there was an opportunity to make it better than asked and be rewarded for it. Submitting faster does the client no good, because they have to wait for the end of the submission phase anyway.

As for the discussions around the default scorecard, they seem to be driven more by the unbalanced weights of the sections and the too-coarse range of scores in the most relevant sections (just 4 values per question) than by the extra-features question.

OK, if the extra-features scorecard still exists, it's good that copilots will have a choice. Maybe you are right and it will generally work better without the extra-features section by default; let's see.

hokienick commented 6 years ago

Scorecard is now updated correctly and set to default. Ticket closed!