Closed tommyschaefer closed 1 year ago
@tommyschaefer @exercism/ruby What do you think of generating individual issues for each of these?
Ideally an issue like that would explain in detail exactly what the process would be to do the work, and then we could label them with "good first patch" (and we wouldn't have to remember to check things off a todo list).
If there's interest for this, we should probably hash out the text of the issue here so that we can get it right before making a gazillion copies of it.
I'd like to have a go at changing the way generators work. This probably needs its own discussion issue, but I want to change some file names and locations, add a superclass with common methods, add automatic version-bump detection, and change the generator invocation script. It will hopefully make it easier to write and run generators.
It would be good to be able to do this before we start spending a lot of time writing guides.
Once that is done, individual issues might be the way to go. Has that proven effective in other instances?
> I'd like to have a go at changing the way generators work. [...] it will hopefully make it easier to write and run generators.
Cool! Yeah, would you open a separate discussion for that?
> It would be good to be able to do this before we start spending a lot of time writing guides.
Agreed.
> Once that is done, individual issues might be the way to go. Has that proven effective in other instances?
Yeah, it has been easier to manage than checkbox lists, especially if we have one top-level issue that each of the child issues links to, which gives us a visual "todo list" that shows how many of the sub-things are completed.
@kytrinyx Ah, yes! That makes a lot of sense to me!
Especially:
> Ideally an issue like that would explain in detail exactly what the process would be to do the work, and then we could label them with "good first patch" (and we wouldn't have to remember to check things off a todo list).
Because some of the exercises already have shared test data in exercism/x-common, I'm thinking those might be especially well suited as a "good first patch".
> Because some of the exercises already have shared test data in exercism/x-common, I'm thinking those might be especially well suited as a "good first patch".
Yeah, agreed. We have a bunch of issues for them over in "todo", but I'm starting to think that the todo repo is not a great idea after all. What we really need to do is get all the other repositories under control :) https://github.com/exercism/todo/issues?q=is%3Aissue+is%3Aopen+label%3Ashared-test-data
While checkboxes aren't perfect, I took a moment and updated the lists :)
There are also some exercises in the "needs canonical data" list for which canonical data does exist.
Tracking canonical data should be done in x-common, so it's not our problem. (When we have our xruby hats on.)
Knowing if an exercise has canonical data in x-common is something we should be aware of as xruby hat wearing folks.
Yeah, I added that section to make it easier to identify which generators would fit well under the "good first patch" label. Unfortunately, it falls out of date pretty quickly unless actively updated, which I haven't had a chance to do recently... Do you all have any ideas for making the list less prone to falling out of date?
Even if we convert them all to issues, we'll still have to actively update the labels / descriptions based on the state of x-common (I think, anyway?).
Someone could run a script that reports the existence of canonical data compared to the list of exercises... and paste the resulting markdown...
@kotp Sounds great! I may play around with this tomorrow :smile:
Exercises with canonical data:
```shell
find ../x-common/exercises/ -name 'canonical-data.json' | sed 's#/canonical-data.json##;s#.*/##' | sort
```
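If it helps, the comparison-and-paste step could be sketched like this. This is just an illustration, not existing xruby tooling: `canonical_data_checklist` and its arguments are hypothetical names, and the slug lists would come from the track's `exercises/` directory and the `find` output above.

```ruby
# Hypothetical sketch: given the track's exercise slugs and the slugs that
# have a canonical-data.json in x-common, build a markdown task list that
# can be pasted straight into an issue comment.
def canonical_data_checklist(track_exercises, canonical_exercises)
  track_exercises.sort.map do |slug|
    # Check the box only when canonical data exists for the exercise.
    mark = canonical_exercises.include?(slug) ? "x" : " "
    "- [#{mark}] #{slug}"
  end.join("\n")
end

puts canonical_data_checklist(%w[bob acronym leap], %w[acronym leap])
```

Running it with the example lists prints a sorted checklist with `acronym` and `leap` checked and `bob` unchecked.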
If you're interested in working on these, have a look at @ajwann's recent PRs #583, #657, #658, and #730 for up-to-date examples of what a good generator looks like.
This issue is now several years old, and the generator has not been updated to work with the current version of problem-specifications. Therefore this issue does not represent low-hanging fruit for new contributors.
If we do find ourselves with a working generator, then we can use `bin/configlet sync --tests` to see which exercises need to be updated, and we can add generators for the tests as we decide to update them.
I would suggest that we close this issue.
I recently came across #381, and saw that @kytrinyx suggested converting all existing exercises to use generated tests as a first step.
On PR #394, @Insti recommended creating an issue to track progress towards this goal.
Here's a list of exercises and whether they generate their tests:
**Converted**

**In Progress**

**Has Canonical Data in x-common**

**Needs Canonical Data**