Closed: iHiD closed this issue 5 years ago.
Sidenote: In general I agree with this. Another negative: `module` over `class`, etc.
Having stubs enforces the file name of the solution, which reduces complexity for the auto-mentoring maintainers.
No, it makes it much more likely. I would personally go the way of the javascript-analyzer: it searches for `exercise.js`, but if that can't be found it will use whatever `.js` file it can find that is not a spec file.
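As a rough sketch of that fallback strategy (the function name, signature, and Node APIs here are illustrative; this is not the actual javascript-analyzer code):

```typescript
import * as fs from "fs";
import * as path from "path";

// Hypothetical sketch of the fallback described above: prefer the
// canonical file name, otherwise take any .js file that is not a spec file.
function findSolutionFile(dir: string, canonical = "exercise.js"): string | null {
  const entries = fs.readdirSync(dir);

  // 1. Prefer the canonical file name if it exists.
  if (entries.includes(canonical)) {
    return path.join(dir, canonical);
  }

  // 2. Otherwise fall back to any .js file that is not a spec file.
  const fallback = entries.find(
    (name) => name.endsWith(".js") && !name.endsWith(".spec.js")
  );
  return fallback ? path.join(dir, fallback) : null;
}
```

With this kind of fallback, a stub mainly makes the common case (the canonical name) more likely, rather than being strictly required by the analyzer.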
Python has had stubs for a while now.
I can attest to the benefit of knowing where to start. Whenever I start a new track in a language I haven't worked with before, I like to have something to point me in the right direction so I don't have to read the test suite or project files to figure out what to name my solution file.
This is something the Java track has historically had issues with. We settled a while ago on providing stubs for all exercises with difficulty 4 or less.
On the first exercise with difficulty 5 we have a hint in the readme explaining how to add the stubs and why they suddenly disappeared. We also keep the solution file structures fairly similar between exercises, making it easy for the student to look at previous solutions to see how to structure the file.
I don't think this affects the auto-mentoring for the Java track too much, as each solution file has only one name that will compile successfully with the tests.
I like the thought of removing stubs progressively, or removing them at a certain difficulty level like the java track does.
Having stubs enforces the file name of the solution, which reduces complexity for the auto-mentoring maintainers.
Aren't students able to split their work in multiple files and submit them all, though? I'm not sure if the analyzers will be able to assume there's only one file 🤔
I agree with this change specially for earlier exercises. I experienced it as an Elixir beginner in the Elixir track and I appreciated the confidence the stub gave me on that initial exercise.
I'm not sure if the analyzers will be able to assume there's only one file 🤔
They will probably be `refer_to_mentor` for now.
Aren't students able to split their work in multiple files and submit them all, though?
Yeah, probably, but some tracks make this easier than others, and language tooling might also influence this ability.
In Erlang we have a strict relationship between module and file names. In Elixir the name of the script under test is hard-coded in the test suite, and the same is true for the Bash suite; probably for many other languages as well. And so far I have not yet seen any exercise that would require splitting the implementation into multiple files (except in those languages that have a header-file concept).
Someone who has an abundance of time (wishful thinking on my part) would be able to read the linked issues from https://github.com/exercism/discussions/issues/114 and summarise any findings they get. Here is what I recall from that issue:
In general, we can see that there was a pattern that most tracks that had a discussion tended to decide in favour of having stub files. By nature of the survey method, there is no data for tracks that did not have a discussion on the record on GitHub.
From this past discussion, I retrieve one point in favour of stubs and contribute it to this discussion:
Reduced annoyance for students: We have a quote that creating the files is supremely annoying. Especially true for tracks where the file needs to go in some deep directory structure.
Subnote on reduced annoyance applicable for statically-typed languages in an exercise that asks the student to implement > 1 function: Typically the entire test file must compile before it can be run; choosing not to provide a stub file means the student has to write this stub for all tested functions, rather than working on one at a time.
Super helpful. Thank you, @petertseng. I'll add that link to my original post.
As a student, I find it annoying as well (I brought this up @iHiD in Slack). Maybe not supremely annoying, but it seems like an unnecessary chore to have to do it on every single exercise.
While it doesn't take much time or effort to create a new file, if you consider that the Ruby track has 90+ exercises, that's a lot of wasted effort. Time is scarce for students, and we should find ways to help them use their time effectively.
Can I step in and give a counter-example? In Pharo I don't want to give a stub. The reason is that, coming from the source of TDD, the idea is that when you hit an error your environment helps you correct it. So the Smalltalk way is to run the first test and hit an error (the model your test references is not defined), then click on the "correct/create" button; the class is defined and execution continues. The next error is that the method you called is not defined; again, click the create button and boom, a new method, and the debugger stops on the error "implementation not defined".
I think the intent is still the same - the student needs to be productive and start writing code as quickly as they can, but I think specifying the implementation isn't always the correct way. Of course in many languages - a template is the right way, but not all languages.
@iHiD cc @kytrinyx question regarding updating exercise to latest version.
When a student updates their exercise; how does it determine which files to overwrite and which files to leave alone?
I'm not using stubs right now so this has never crossed my mind. I'm very interested to know the answer too.
When a student updates their exercise; how does it determine which files to overwrite and which files to leave alone?
From my observations, user submitted files are kept as they have been submitted, everything else gets synced with upstream.
This is annoying when the name of the file you are supposed to implement changes for some reason...
From my observations, user submitted files are kept as they have been submitted, everything else gets synced with upstream.
Which means that if they have added tests, they will never get new tests. Which also means that if we add stubs it will not overwrite their submissions.
From my observations, user submitted files are kept as they have been submitted, everything else gets synced with upstream. Which means that if they have added tests, they will never get new tests. Which also means that if we add stubs it will not overwrite their submissions.
All these points are correct.
When students add tests, do you not ask them to create a separate TestCase? That’s what I want students to do in Pharo (but you’ve reminded me I need to make that more explicit in our track docs). The reasoning is that the separate test case then gets uploaded as part of their submission for review.
Most of us are using file-dependent languages, and it's much simpler for students to just add a single function/test description to the existing file than to create a completely new file, which has to obey naming rules to be found by the test runner, while also needing boilerplate code to set up the testing library and/or the before/after hooks for the tests, and so on…
Which means that if they have added tests, they will never get new tests.
This is true regardless of whether you have stubs or not.
The C track does something similar to what @Smarticles101 described for the Java track. Creating files is a key part of learning how to C good. The test files on the track make it evident which file/filename needs to be implemented.
I understand that this might be different in different languages. The fact that this difference exists should show that a blanket mandate that all language tracks provide files in a given way could reduce the value of those tracks for which it is not needed.
An alternative might be to expose some metric on how many/much students struggle with this for any given track and thus handle it per track?
Creating files is a key part of learning how to C good. The test file on the track make it evident what file/filename is required to be implemented.
I don't see why the C track is different from other tracks, to be honest. Creating a file is a key part of using any programming language that uses source files (which most do). However, Exercism aims to teach fluency in a language, not how to use an IDE or build system. I understand that creating files is an important skill to have, but I don't think it is something Exercism should be teaching (or at least not for the vast majority of exercises; maybe only for the later ones). An alternative take on this is that we are teaching fluency to people who already know how to program! This means it is already extremely likely that our students know how to create a file, and we are then forcing upon them a repetitive action that doesn't teach them anything new.
I don't see why the C track is different from other tracks to be honest.
This.
I'm not a regular contributor to the Perl track, but we use it (and Exercism Teams) at work as part of training new hires, since most come without a Perl background. Some Perl track contributors prefer that there are no stubs for approximately the same reason that @wolf99 gives, and it is a valid reason. But I've witnessed how it affected our newcomers, and I don't think the gain is greater than the annoyance.
(Sample bias: I've tried this with 5 hires, and they were all well-versed in one or more other languages, but they generally had problems figuring out what to put in the file, since there are no header files in Perl. Maybe the C track doesn't have that problem. Edit: And maybe it's hard to know because of missing feedback mechanisms at that particular point.)
My strong opinion is that the positives outweigh the negatives.
So the issue is not as much "It's more useful on track X" as it is "The maintainers of track X have autonomy to decide". The Exercism Project generally gives a lot of freedom to track maintainers, and I believe this is a strong motivator when you don't get paid.
So if you can't convince everyone that the positives outweigh the negatives, should there be some kind of vote on global policies to be done with it? Or, in the case of being able to predict what the expected filename is, can we at least agree that even if the file is missing to begin with, a solution's files must be named predictably? Just like `"solution_pattern": "example.*[.]hs"` occurs in `config.json`, we could have another key that guides the `exercism` CLI towards warning students that the proper file(s) were not submitted. Edit 2: Or some other way to see which files were touched since the exercise was downloaded.
(A side note on auto-mentoring and file prediction: The Haskell track hasn't got an auto-mentoring tool yet, but one problem that will eventually occur is that iterations that use external libraries are rarely accompanied by a `package.yaml` with the proper library inclusion. So I foresee that the problem extends to guiding the student towards submitting all the files that are technically necessary. Maybe others have found a similar problem and a way to deal with it.)
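A minimal sketch of the proposed CLI-side check (the `solution_pattern` key does occur in `config.json`; the check itself, its function name, and the idea of reusing a pattern this way are hypothetical):

```typescript
// Hypothetical sketch: warn when none of the files a student is about to
// submit matches the track's expected solution-file pattern. The pattern
// is assumed to come from a config.json-style key; the key and this
// function are invented for illustration, not part of the real CLI.
function checkSubmission(files: string[], submitPattern: string): string[] {
  const re = new RegExp(submitPattern);
  const matching = files.filter((f) => re.test(f));
  if (matching.length === 0) {
    console.warn(
      `Warning: none of the submitted files match the expected pattern ${submitPattern}`
    );
  }
  return matching;
}
```

For example, `checkSubmission(["TwoFer.hs", "package.yaml"], ".*[.]hs")` returns `["TwoFer.hs"]`, while a submission with no matching file triggers the warning.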
The Exercism Project generally gives a lot of freedom to track maintainers, and I believe this is a strong motivator when you don't get paid. So if you can't convince everyone that the positives outweigh the negatives, should there be a kind of voting of global policies and get done with it?
If there's a question where there is no consensus (e.g. the name vs description debate on problem-specifications right now, and seemingly this one), and a decision needs making, then the leadership team will make a decision considering what everyone has said, along with our wider knowledge and opinions of Exercism. However, as both Katrina and I have had a really busy couple of months, that hasn't happened (making a decision where lots of people disagree needs time and thought to get right), so there are a couple of issues like this that are lagging.
With an issue like this, where I've specified a strong opinion up front and there is no overwhelming disagreement, it would probably need a new "con" that I've not thought of in the introduction for me not to decide to move the policy forward.
Fundamentally, it would take a lot for us to override a strong consensus by the maintainers, but if there's not one, then we'll just make the decision that we feel leads to the best experience for Exercism's users, with the least burden for the maintainers.
The summary of the below comment:
Given the above observations, I inferred that the leadership would prefer to invoke authority only when necessary (of course, this is only an inference, worth what you paid for it).
One sort of situation where it is possible to minimise the burden on the leadership is when a choice X made by maintainers affects only the maintainers themselves, who also receive some benefit Y from it. In these situations, no consensus is necessary; maintainers that choose not to make choice X have presumably judged that they are willing to accept not having benefit Y, and no harm is done.
A common pattern in recent choices that lie outside the above category is: A choice made by maintainers may affect students and/or mentors. In these situations the natural alignment of incentives doesn't exist. Since maintainers are not necessarily students nor mentors, they aren't personally feeling the disadvantages of not making a certain choice. This is when the aforementioned wider knowledge comes in.
I don't know if it's possible to create more natural alignment of incentives, but if it were then more decisions would be easier. So I suppose the best I can do for now is to encourage looking for situations where it is possible to better align the incentives.
Thanks @petertseng. That's helpful. I have one point to add clarity to. There are roughly two different areas of Exercism: the "product" side and the "open source" side.
(These are badly named, as both are open-source, and both are really about product, but hopefully the distinction is clear enough.)
We have a very firm grip on the product side (e.g. we don't generally accept PRs for the website, and we say "this is the direction we're taking this feature", etc.). That's because product work is best done by specialist individuals or tightly formed teams, and it often requires full-time effort.
In contrast, we aim to have a light touch on the "open source" side. We are blessed with a variety of different people who maintain the majority of Exercism's code, and that variety of opinions tends to mean that everything self-regulates. In fact, the maintainers generally know better than the leadership team what is actually needed. When people cannot self-regulate, what we really need is someone to say "Katrina, Jeremy - please can you make a decision on this", and then we'll do that.
This issue is slightly different, as I wrote it with a firm opinion, asking for anything I might not have considered. It's also really a product question: "is this better/worse for students?". The main reason I put it up for discussion rather than as an announcement of something we're doing was that it impacts the maintainers' time, as they will need to create stubs.
I think the issue here is that I should have made a decision a while back and closed the issue, but I've had a very intense two months and got behind on things, so that didn't happen. I will rectify this in my next comment.
So thank you all for your pitching in your thoughts 💙
To conclude the discussion, I don't believe anyone has suggested anything new that wasn't in the OP. I agree that there is definite value in learning to create files, but I don't feel that that is something that Exercism should need to teach. The moment someone works on a project in the "real world" they'll learn how to create files if they don't know, and it won't be hard for them to work out. If we've tooled them up with everything but that skill, but made solving exercises less annoying for that person while learning, I'm fine with that situation.
So, I'm going to move forward with this proposal and ask all tracks to add stub-files to their exercises. When working through the Track Anatomy project, tracks can decide whether they want to provide content within those stubs, or just the empty files.
I'll work out with Katrina how best to communicate this to everyone.
Suggestion to include stubs in Delphi track exercises
Hello, I would like to suggest that the Delphi track on Exercism include exercises with stubs, providing the minimal necessary structure (such as `unit`, `class`, `class function`, etc.).
Currently, the exercise files start completely blank, which can be extremely confusing for beginners. Additionally, many tests require the use of `class function`, and beginners might not be familiar with this syntax, which increases the difficulty unnecessarily.
I suggest that the exercise files provide the basic structure so that students can focus on the logic of the problem, instead of spending time setting up Delphi's structure. Here’s an example of what I mean:
```delphi
unit SumFunction;

interface

type
  TCalculator = class
  public
    class function Sum(a, b: Integer): Integer;
  end;

implementation

class function TCalculator.Sum(a, b: Integer): Integer;
begin
  // TODO: Implement the sum of the two values
end;

end.
```

This would help:
- Make life easier for beginners who are still learning Delphi’s syntax.
- Reduce confusion, especially in exercises that involve `class function` and `class var`.
- Improve consistency and reduce frustration for students who aren’t familiar with Delphi’s structural complexity.

Thank you!
Hello. This is a 5 year old post. I suggest opening a new post on the forum (https://exercism.org/r/forum) with your suggestions. Thanks :)
Some tracks currently have stub files for exercises. Other tracks only provide the test file. I propose making stubs a requirement on all exercises on all tracks. My reasons are:
`public`, `static`, `using`, `class` and `string`. For Ruby it requires `module` or `class` and `def` - neither of which would necessarily be clear. Providing stubs reduces this hard learning curve.

The three negatives I have heard to this proposal are:
My strong opinion is that the positives outweigh the negatives. I would also like to suggest adding the progression through reduced stubs into the Track Anatomy Project.
Previous discussion was here.
Have I missed any pros/cons? Does anyone have anything to add that I've not considered?
If people could :+1: and :-1: on this issue with their preference, I would appreciate it.