Closed loziniak closed 2 years ago
(edit: moved the checklist to first issue post, to track checklist progress)
I will work on the hello-world (1)
It already has a solution, but of course you can solve it as a learning example. Most needed are solutions for exercises with an empty checkbox.
I believe you can find some of the solutions on Rosetta Code. For instance, the solution for the Roman Numerals problem: http://www.rosettacode.org/wiki/Roman_numerals/Decode#Red
I will try out darts, if someone else isn't already doing it. I'm curious about the workflow for the tests. Is the suggested way to increment the `ignore-after` field each time the tests all pass, and then run them again?
@dander exactly. This is the workflow in every Exercism track, although nothing stops you from running all the tests from the very beginning. I suspect it's a sort of TDD good practice. BTW nice to see you involved!
@wallysilva I took a roman-numerals solution from Rosetta Code, thanks for the advice!
I've started working on sgf-parsing
> I've started working on sgf-parsing
How is it going? Perhaps we could go live soon. Do you want to finish it? If not, I'll take it.
I had to stop for a while because of general life stuff going on, but I've been trying to get back to it lately. I was finding it a bit tricky to indicate clearly in the tests what was wrong with the outputs. I'm also trying to figure out an appropriate way to handle expected errors. Have you encountered that in some of the other challenges? To clarify what I mean, some of the tests have expected values containing nested data structures:
```red
expected: #(
    properties: #(
        A: ["b"]
        C: ["d"]
    )
    children: []
)
```
While some have an `error` property with an associated error message:
```red
expected: #(
    error: "properties without delimiter"
)
```
I'm interpreting that to mean that the test should trigger a `'user` error with that message.
One thing I've been a bit conflicted on is whether the tree-like structure should be the strict map/block structure above, or something more flexible, since there could be different kinds of solutions.
Yes, this error pattern appeared for me in largest-series-product: 79a9cf08ca76. You should be able to throw a map with an `error` key, or use `cause-error` with an appropriate message. I extended the testing "framework" lately (009679130bd87f5cfb27f5240013124bc94a95cc), but errors should still work.
To satisfy the structured-output expectations, I would suggest you just return a map from the tested function.
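For illustration, a minimal sketch of the two error-reporting options (the function name, messages, and exact `cause-error` arguments here are my assumptions, not the actual example code):

```red
parse-sgf: func [input [string!]] [
    if empty? input [
        ;; option 1: return a map with an error key,
        ;; matching the expected #(error: "...") value
        return #(error: "tree missing")
    ]
    ;; option 2: raise a 'user error with the expected message instead:
    ;; cause-error 'user 'message ["properties without delimiter"]
    #(properties: #() children: [])
]
```

Either way, the test framework only needs to compare what it receives against the expected map.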
I pushed up an example solution for the sgf-parsing exercise (finally): #62
I initially wanted to use `parse` with `collect`/`keep`, but since the outputs expect nested maps, and `collect` can only generate blocks, I wasn't sure how that would work. So instead I used a stack to keep track of the current location in the data structure when inserting child nodes. I found this problem quite difficult to get right. I'm not sure that using `parse` is the easiest solution for it, but it seems like a natural place to show off that feature of the language.
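A rough sketch of that stack technique (this is not the actual #62 code; `new-node`, `push-child`, and `pop-node` are invented names):

```red
new-node: func [] [
    make map! reduce ['properties make map! [] 'children copy []]
]

root: new-node
stack: reduce [root]                    ; top of the stack = current parent

push-child: func [/local node parent] [
    node: new-node
    parent: last stack
    append parent/children node         ; attach under the current parent...
    append stack node                   ; ...and descend into the new node
]

pop-node: func [] [take/last stack]     ; ascend again, e.g. on ")" in the input
```

The `parse` rules then only have to call `push-child` and `pop-node` at the right places; the stack keeps the nesting bookkeeping out of the grammar.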
I'm looking into adding the project metadata pieces. I am considering adding concepts for `parse` and `recursion`. Is there a catalog of existing concepts somewhere that I should reference? Is there anything I need to know about the uuids, or do I just generate a new one?
@loziniak I think #62 is ready to be merged, if it looks good to you. I ended up adding stubs for a `parse` concept, but removed the `recursion` one. Though I suppose it could be solved without `parse`.
There is no central point for concepts. For me, it felt natural to just solve exercise examples and see whether I'd need any new concepts to explain them. So, it seems just as you did with `parse`. I have some initial work done to start with the basics and evaluation concepts. Are you thinking about working more on concepts? It's a great feature; perhaps we could add concepts one by one. There is a task for it: #37.
UUIDs can be generated offline by hand, they just need to be unique throughout the project. You can use configlet for this, or any online or system tool you prefer. Also, during the track's unit tests, configlet checks the UUIDs for uniqueness, so any errors are caught.
I will consider contributing to the concepts. I just need to be wary of how big a bite I take.
Configlet is pretty cool. I discovered it when the pull request generated a failed configlet run.
Just pushed the last exercise of the 20, yay!
At least 20 practice exercises are needed to launch the track.
checklist:
Each exercise name (its so-called "slug") is linked to a description; the task's difficulty is in parentheses. The choice is arbitrary, based on my personal reception of the exercise description, and is open for discussion/change. This list was chosen randomly from Exercism's problem database. It's sorted by difficulty, and this order should also be kept in `config.json`.

instructions:
- `exercises/practice/<exercise-slug>/<exercise-slug>-test.red`: change the comments so that the example solution `exercises/practice/<exercise-slug>/.meta/example.red` is tested, and change the `test-init` function's second argument from `1` to however many tests you want to run in `<exercise-slug>-test.red`.
- `test-init` line: uncomment the solution file, comment the example file, and change `limit` back to `1` (second argument).
- `config.json`: if you want, add `practices` and `prerequisites` concepts. Copy the exercise's config to the proper position, so that all exercises stay sorted from easiest to toughest.
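Assuming the comment-toggle convention the steps above describe (the file names and the exact `test-init` call here are guesses, not the actual test file), the top of a test file might look like:

```red
;; while verifying the example solution:
do %.meta/example.red
; do %hello-world.red            ; student's solution file, commented out

;; second argument is the limit: raise it to run more tests while
;; developing, and set it back to 1 before committing
test-init "hello-world" 1
```

Before committing, the two `do` lines are swapped back so the student's stub is the one being tested.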