dogweather opened 5 years ago
The solutions do have a metadata file, but it doesn't say what the solution would be named.
Sorry, accidentally submitted before I was done typing.
The long and short of it is that with over 120 possible exercises in over 50 different programming languages, we can't currently infer the exercise name. It would require an extensive change in over 60 different repositories to accomplish this, and at the moment I think that the effort of implementing and maintaining it would be more than the perceived gains in terms of UX.
I'm going to leave this open for a bit in case someone can think of how to do this more easily.
Oh! I didn't think to check for a metadata file. And it has a unique identifier for the solution:
```json
{
  "track": "python",
  "exercise": "pangram",
  "id": "3f6863eba984439fafcb595be59f7d7e"
  # ...
}
```
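Since the metadata file already carries the track, exercise, and solution id, the CLI (which is written in Go) could read those fields directly. A minimal sketch, assuming only the JSON shape shown above:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Metadata mirrors the fields shown in the solution's metadata file.
type Metadata struct {
	Track    string `json:"track"`
	Exercise string `json:"exercise"`
	ID       string `json:"id"`
}

// parseMetadata decodes the metadata JSON into a Metadata value.
func parseMetadata(data []byte) (Metadata, error) {
	var m Metadata
	err := json.Unmarshal(data, &m)
	return m, err
}

func main() {
	raw := []byte(`{"track":"python","exercise":"pangram","id":"3f6863eba984439fafcb595be59f7d7e"}`)
	m, err := parseMetadata(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(m.Track, m.Exercise, m.ID)
}
```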
So one solution could be a new server endpoint which returns the required exercise filename, given a solution id. `submit` would then first call this endpoint to discover which file to submit.
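To make that flow concrete, here is a sketch of how `submit` might build the lookup URL. The endpoint path is purely hypothetical; no such endpoint exists today:

```go
package main

import "fmt"

// solutionFilesURL builds the URL for a *hypothetical* endpoint that
// would return the expected solution filenames for a solution id.
// The path below is an assumption, not part of the current API.
func solutionFilesURL(base, solutionID string) string {
	return fmt.Sprintf("%s/solutions/%s/files", base, solutionID)
}

func main() {
	// The id comes from the solution's metadata file.
	fmt.Println(solutionFilesURL("https://api.exercism.io/v1", "3f6863eba984439fafcb595be59f7d7e"))
}
```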
I'm guessing that the server (1) can retrieve an exercise given a solution, and (2) has knowledge of which file names that exercise expects.
The API currently does not have any knowledge about the expected names of solution files. Most language tracks follow a convention, but it's a different convention from programming language to programming language, and we have no way of guessing. It's not currently written down in a way that is programmatically accessible.
Some language tracks require submitting multiple files.
I don't see a way of doing this that doesn't require a lot of work and make things significantly more complex.
My half-baked ideas of how to do this were at https://github.com/exercism/cli/issues/370#issue-207011775. I ended up going with a local list of rules hard-coded per language, and it has served me well enough: https://gist.github.com/petertseng/e3e88bf1c383865ff67f4095413993b2. But I did not submit it as a PR for this repo because (1) it is written in the wrong language for this repo, and (2) it has bad UX for languages it doesn't know about.
My reading of the relevant issues suggests that everyone who encounters this idea likes it, but we haven't figured out a viable implementation strategy. Let me suggest an implementation strategy:
- `exercism submit FILE1 [FILE2 [...]]` behaves exactly as currently: it submits exactly the set of explicitly named files.
- `exercism submit` with no arguments finds all files matching patterns from a list of globs, and submits those.

One nice thing about the way Exercism works is that each exercise, for each track, gets a directory of its own. Therefore, I believe that it should be safe to apply all globs for all tracks to all files in any particular exercise directory; most will be misses, but the globs for the track in question should be hits.
Note: that's an assumption which might potentially be falsified. If you know of any counterexamples, I want to hear about them. If the assumption is in fact falsified, it shouldn't be all that hard to split out globs by track, as the exercism tool already knows what track it's dealing with for any given exercise, but it's even easier if we can just skip that step.
A strength of the exercism CLI tool is that it's a single portable executable; we can't mess with that by putting config files alongside it. Luckily, there are tools (1 2) which embed external files directly into the built executable. This enables us to write a config file in TOML (or whatever), embed it, and read it at runtime.
The syntax is very vulnerable to bikeshedding, but it could look like this:
```toml
[language_globs]
rust = ["Cargo.toml", "Cargo.lock", "src/**.rs"]
```
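A sketch of how the CLI might apply such a table at runtime, using a simplified matcher in which a bare `**` path segment spans any number of directories (the map below just mirrors the example above; it is not an official list of patterns):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// languageGlobs mirrors the proposed embedded config; illustrative only.
var languageGlobs = map[string][]string{
	"rust":   {"Cargo.toml", "Cargo.lock", "src/**.rs"},
	"python": {"*.py"},
}

// matchGlob reports whether a slash-separated relative path matches a
// pattern. Only a bare "**" segment spans directories in this sketch.
func matchGlob(pattern, path string) bool {
	return matchSegments(strings.Split(pattern, "/"), strings.Split(path, "/"))
}

func matchSegments(pat, parts []string) bool {
	if len(pat) == 0 {
		return len(parts) == 0
	}
	if pat[0] == "**" {
		// a bare "**" may consume zero or more path segments
		for i := 0; i <= len(parts); i++ {
			if matchSegments(pat[1:], parts[i:]) {
				return true
			}
		}
		return false
	}
	if len(parts) == 0 {
		return false
	}
	ok, _ := filepath.Match(pat[0], parts[0])
	return ok && matchSegments(pat[1:], parts[1:])
}

func main() {
	// Apply every glob for the track to each file in the exercise dir.
	for _, f := range []string{"Cargo.toml", "src/lib.rs", "notes.txt"} {
		for _, pat := range languageGlobs["rust"] {
			if matchGlob(pat, f) {
				fmt.Println("would submit:", f)
			}
		}
	}
}
```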
So I tried to combine @coriolinus's and @petertseng's proposals.
Here's the link https://github.com/chocopowwwa/exercism-cli/blob/issue-824/config/autodetect.go
And I'm struggling to create the test for submit.go; it seems like ctx.exercise requires a file name path, but my function requires an exercise object in order to get the file names. I haven't really dug into the source code yet, though 🤷.
Another approach is to store the file name convention of each track instead of using globs: use the convention to transform the exercise name into the solution file name. That way we can precisely determine the solution file name and sort of enforce a good naming convention for each solution.
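As a sketch of that convention-based approach (the per-track rules below are illustrative guesses, not an official mapping maintained by Exercism):

```go
package main

import (
	"fmt"
	"strings"
)

// solutionFileName applies a per-track naming convention to an exercise
// slug. The conventions here are assumptions for illustration only.
func solutionFileName(track, exercise string) string {
	switch track {
	case "python":
		return strings.ReplaceAll(exercise, "-", "_") + ".py"
	case "ruby":
		return strings.ReplaceAll(exercise, "-", "_") + ".rb"
	case "rust":
		return "src/lib.rs" // fixed path regardless of exercise name
	default:
		return "" // unknown track: fall back to explicit filenames
	}
}

func main() {
	fmt.Println(solutionFileName("python", "two-fer"))
}
```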
Looks plausible! Once you have the tests going, I'd be interested to see the PR; feel free to tag me onto that when it's ready.
I think globs are a much better idea than file name conventions, simply because there are plenty of languages for which there is no strong file name convention; globs are much more powerful. Also, IMO, enforcing a "sort-of" good naming convention is out of scope for a filename inferer.
I'd like it if, as a student, I could simply run:
...and it'd infer that the argument is `two_fer.py`. I think this would be a big UX win. I don't know, though, whether any exercises are so complex or oddly named that the solution filename couldn't be inferred. If this is a problem, we could introduce an exercise metadata file. This could also help #230.

Open to a PR for this?