exercism / problem-specifications

Shared metadata for exercism exercises.

list of tracks with test generators #1411

Closed petertseng closed 5 years ago

petertseng commented 5 years ago

Well, since the repo containing https://github.com/exercism/discussions/issues/155 got closed, it's no longer possible to edit the issue text there. It's not clear that there's a better home for this issue. I don't want to put it in https://github.com/orgs/exercism/teams/track-maintainers, because then people not in the org can't see it, and I don't think this information should be withheld from people outside the org.


Test generators are anything that uses the canonical-data.json file from problem-specifications to generate a test suite to be delivered to students of a given track.
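
For concreteness, here is a minimal sketch of the shape such a generator tends to take, assuming the usual canonical-data.json layout of (possibly nested) `cases` entries with `description`, `input`, and `expected` keys; the file paths and the `render_test` stub are hypothetical, not taken from any real track:

```python
#!/usr/bin/env python3
"""Minimal sketch of a track test generator (illustrative only).

Reads one exercise's canonical-data.json and emits one test per case;
the rendering step is where all the track-specific work happens.
"""
import json
from pathlib import Path


def iter_cases(node):
    """Canonical data may nest case groups, so walk them depth-first."""
    for case in node.get("cases", []):
        if "cases" in case:      # a named group of cases
            yield from iter_cases(case)
        else:                    # a leaf test case
            yield case


def render_test(case):
    """Track-specific stub: turn one canonical case into test source code."""
    # A real generator would map case["input"] onto the target function's
    # signature and format case["expected"] idiomatically for the track.
    return f'# test: {case["description"]}\n'


def generate(canonical_data_path, out_path):
    data = json.loads(Path(canonical_data_path).read_text())
    Path(out_path).write_text("".join(render_test(c) for c in iter_cases(data)))


# Example (paths are hypothetical):
# generate("problem-specifications/exercises/two-fer/canonical-data.json",
#          "two_fer_test.py")
```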

If your track has these, I would be interested to hear about it.

I hope this can help tracks that don't have generators evaluate whether to have them, and allow tracks that already have generators to learn from each other.

Questions I would like to ask:

- How much additional code must you write to generate tests for each new exercise?
- Although all the inputs to an exercise are guaranteed to be under the input key, for exercises with more than one input, it's not certain what order they should go in. How do you deal with this?
- Are there any possible changes to the canonical JSON schema that would make generation easier?

This issue will be closed immediately, because there's no need to have it take up space in the list of open issues (there's no call to action). Of course, even after it is closed, please feel free to comment with any additional answers.

If as a result there are any proposed changes to the schema, an appropriate issue can be created for that.

To give us a head start, here is what I know of some languages' generators. Please forgive me for being greedy and filling in information for tracks that I am unfamiliar with. Please correct these or add any additional tracks I missed. In alphabetical order:

Bash

C

CFML

Common Lisp

Dart

https://github.com/exercism/dart/blob/master/bin/create_exercise.dart

Erlang

https://github.com/exercism/erlang/tree/master/testgen - see https://github.com/exercism/erlang/tree/master/testgen/src for per-exercise configuration.

Factor

Go

JavaScript

OCaml

Perl 6

Pharo (Smalltalk)

Python

Ruby

Rust

Scala

Vimscript

NobbZ commented 5 years ago

I was pointed to this list today and wanted to add Erlang to it.

How much additional code must you write to generate tests for each new exercise?

In the common case there is not much to add: mostly mapping the named arguments from the JSON to an idiomatic order of positional arguments in the function call, and sprinkling in syntax-generating functions.

At least, that is true for the simpler exercises which do not require keeping state.

Of course we have to generate additional boilerplate code if the test cases involve a sequence of calls for which we need to prepare state beforehand and compare state afterwards.

Also, when a test deals with complex data, we sometimes need to map it to a more idiomatic representation. Erlang doesn't have objects, and maps are not idiomatic for most of those types, so we translate them into what Erlang calls a "record": basically a tuple with syntactic sugar that lets us refer to fields by name.

Although all the inputs to an exercise are guaranteed to be under the input key, for exercises with more than one input, it's not certain what order they should go in. How do you deal with this?

We manually map them onto an idiomatic positional-argument scheme, as outlined above.
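
For tracks that keep this mapping by hand, the per-exercise configuration can be as small as an ordered list of input keys. A hypothetical sketch of that idea (the exercise names and the `ARGUMENT_ORDER` table are illustrative, not taken from any real generator):

```python
# Hypothetical per-exercise ordering of canonical "input" keys; the exercise
# names and key names here are illustrative, not taken from a real generator.
ARGUMENT_ORDER = {
    "hamming": ["strand1", "strand2"],
    "clock": ["hour", "minute"],
}


def positional_args(exercise, case_input):
    """Map the named inputs of one canonical case onto positional arguments."""
    order = ARGUMENT_ORDER.get(exercise, sorted(case_input))
    return [case_input[key] for key in order]


# positional_args("hamming", {"strand2": "GGACTGA", "strand1": "GGACGGA"})
# -> ["GGACGGA", "GGACTGA"]
```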

Are there any possible changes to the canonical JSON schema that would make generation easier?

I have not come across any yet.

Stargator commented 5 years ago

Following up on @NobbZ's comment, Dart also has a test generator. It's in the bin/create_exercise.dart file.

How much additional code must you write to generate tests for each new exercise?

The test generator does not meet all of our needs, so sometimes it does not accurately determine what the expected value's type should be. In those cases it just sees an empty list and doesn't know whether that's supposed to be a List of objects, strings, or anything else. But that's just a matter of changing the type by hand in a handful of cases.

Although all the inputs to an exercise are guaranteed to be under the input key, for exercises with more than one input, it's not certain what order they should go in. How do you deal with this?

We generally take the first input and make it the first parameter.

Are there any possible changes to the canonical JSON schema that would make generation easier?

Unsure. As mentioned above, specifying the type when the value is an empty collection (list, map, set, etc.) could be useful, but types differ from one language to another, and I think that could make things more confusing for the maintainers of the test generators.
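
Until (and unless) the schema grows type hints, one workable approach is a small per-exercise override table that the generator consults whenever a value is an empty collection. A hypothetical sketch (the `TYPE_OVERRIDES` entries and the Dart-style type names are illustrative only):

```python
# Hypothetical overrides for values whose JSON representation is ambiguous
# (an empty list could be List<String>, List<int>, ... in the target language).
TYPE_OVERRIDES = {
    ("acronym", "expected"): "String",
    ("sieve", "expected"): "List<int>",
}


def target_type(exercise, field, value):
    """Guess a target-language type name for one canonical value."""
    override = TYPE_OVERRIDES.get((exercise, field))
    if override is not None:
        return override
    if isinstance(value, bool):  # check bool before int: bool subclasses int
        return "bool"
    if isinstance(value, int):
        return "int"
    if isinstance(value, str):
        return "String"
    if isinstance(value, list):
        if not value:
            # An empty list carries no element type; without an override we
            # can only fall back to a catch-all type.
            return "List<dynamic>"
        return f"List<{target_type(exercise, field, value[0])}>"
    return "dynamic"


# target_type("sieve", "expected", []) -> "List<int>" thanks to the override.
```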

petertseng commented 5 years ago

Need to add https://github.com/exercism/python/blob/master/bin/generate_tests.py to this list

The discussions repo got reopened, so now there are two editable versions of this issue, but I'll treat this one as the canonical one going forward, since I already spent time on it after the discussions repo was closed.

yawpitch commented 5 years ago

> Need to add https://github.com/exercism/python/blob/master/bin/generate_tests.py to this list
>
> The discussions repo got reopened, so now there are two editable versions of this issue, but I'll treat this one as the canonical one going forward, since I already spent time on it after the discussions repo was closed.

Added the Python generator + details.