coala / meta

A repository for non-code activities, such as engagement initiatives, and other meta issues

GCI repeatable tasks: bear asciinema #47

Closed jayvdb closed 7 years ago

jayvdb commented 7 years ago

There could be two tasks:

  1. Create a missing asciinema, with only a single set of errors, using the bad code in the bear's unit test. This should be a GCI "Easy" task, and very well documented. All participants are expected to do one of these "Easy" tasks, and can do at most two before they must move on to harder tasks.

  2. Advanced asciinema, showing at least one message from each severity supported by the bear (INFO, NORMAL, MAJOR).

A list of bears without an asciinema needs to be maintained during the GCI period.

And we need a list of bears with only a basic asciinema, which are suitable to be replaced with a more advanced asciinema.

sils commented 7 years ago

@jayvdb I think we can get better results by generating asciinemas. This should be super easy with termtype (if @adtac fixes the bug where a nondeterministic character gets the wrong thing), or we could even generate them directly.

Point is: if we generate them, we can reproduce that at any time with updated content, we have more consistent asciinemas, less redundancy (want a better asciinema? Improve your tests!) and we can control typing speed and stuff like that.

If we do it with GCI it's a lot of manual work for them, and for us to mentor them. I'm currently leaning away from this, and more towards generating it, possibly with https://github.com/ctate/asciinema

jayvdb commented 7 years ago

I am not confident that unit tests are great walkthroughs. They are better than nothing, and they help a newbie start without experience, but a good walkthrough has a different goal than a good unit test.

Maybe there is a task to compare each asciinema with its unit test: mark in a spreadsheet which one is better, and if the asciinema is better, create an issue to expand the unit test.

A follow-up task is to write better unit tests using the asciinema.

sils commented 7 years ago

@adtac had an idea of letting them use the test generator and feed it with input, so we gain the pros but can still edit the content. @adtac, can you elaborate on that? How much work is it from our side to get the generator going well enough?

adtac commented 7 years ago

We can have a template termtype or ctate/asciinema file and then fill in the blanks for each bear, I think. I haven't fully thought out the idea, but from the surface it looks pretty doable.
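To make the "fill in the blanks" idea concrete, here is a minimal sketch of what a generator could look like. Everything here is hypothetical: the bear name, the coala invocation, and the canned output are placeholders, and the JSON layout assumes the asciinema v1 recording format (a `stdout` array of `[delay, text]` events), which is what would let us control typing speed as @sils suggested.

```python
import json

def typed_events(text, delay=0.08):
    """Emit one [delay, char] event per character to simulate typing."""
    return [[delay, ch] for ch in text]

def make_cast(bear, bear_output, width=80, height=24):
    """Build an asciinema-v1-style recording dict for one bear."""
    # Illustrative command line; the real invocation per bear would
    # come from the template / test data, not be hardcoded here.
    command = "coala --bears {} --files example.py\r\n".format(bear)
    stdout = typed_events(command) + [[0.5, bear_output]]
    return {
        "version": 1,
        "width": width,
        "height": height,
        "duration": sum(delay for delay, _ in stdout),
        "title": "Demo of {}".format(bear),
        "stdout": stdout,
    }

# Placeholder bear and output, purely for illustration.
cast = make_cast("PEP8Bear",
                 "example.py: E302 expected 2 blank lines, found 1\r\n")
with open("demo.json", "w") as f:
    json.dump(cast, f)
```

The point of the sketch is that the per-bear "blanks" (bear name, command, expected output) are the only inputs, so regenerating every recording with updated content or a different typing speed is a one-line change.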

I can definitely do it in one hackathon (like 24 hours) after my exams (which end on Nov 26/27). That should be soon enough for GCI, no?

srisankethu commented 7 years ago

@adtac GCI starts on November 28th :(

sils commented 7 years ago

and the good people will start earlier according to @jayvdb

jayvdb commented 7 years ago

We can have a template termtype or ctate/asciinema file and then fill in the blanks for each bear, I think.

Indeed. That is the level of help we should be giving them in the task description.

@adtac GCI starts on November 28th :(

That isn't a problem. We write the description now, and improve the description and methodology as we find time and as we notice better methods.

sils commented 7 years ago

@srisankethu assigning you then; you'll finish up writing the asciinema tasks, right?

jayvdb commented 7 years ago

I've significantly expanded https://github.com/coala/coala/wiki/Google-Code-In-Task-Simple-asciinema , but some sections are still quite vague. Every step needs to be 100% explained. This is the first task the participants will see. Test it by stepping through the task yourself, and record any detail not yet described.

srisankethu commented 7 years ago

@jayvdb Awesome! Thanks! I will make a note to explain each step 100%.

jayvdb commented 7 years ago

I think this is about as good as we'll get.