@angelinahli That would be very helpful!
@iHiD and @nicolechalmers are working on some design changes that could help with this, but we would need some resources to point people to.
Do you have a list of the common mistakes that you see in the first exercises in Python?
I could see this as individual blog posts for each mistake to expand on the whys and wherefores, and then one of those "list posts" (10 top mistakes... so buzzfeed-y) that lists each mistake and points to the expanded versions. Or something.
Anyway, this would be very valuable.
@kytrinyx Does Rikki work in Python?
@angelinahli If you're not familiar, Rikki is a bot that provides guidance on common issues. As part of the new version it will give suggestions to mentors, and I'd love recommended reading to feature in those suggestions. We're also making extra reading more of a first-class citizen in the whole mentoring process, with it built into the UI, etc.
@iHiD We haven't written a rikki module for Python. I'd love to have one that catches these common early mistakes.
@kytrinyx @iHiD this all sounds great!
@kytrinyx, I don't have a list of common mistakes off the top of my head (other than the `if bool return True else return False` issue I've seen many times on the leap exercise), but one thing I've noticed is that people tend to duplicate built-in functions they don't know exist. It might be worth pointing people to the Python docs on string methods for string-heavy problems, the dict docs for dict-heavy problems, etc.
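For illustration, here's a rough before/after sketch of the kind of duplication I mean (the function and inputs are invented, not taken from any particular submission):

```python
# What beginners often write: a hand-rolled loop...
def count_letter(text, letter):
    total = 0
    for ch in text:
        if ch == letter:
            total += 1
    return total

# ...which duplicates a built-in that already exists:
"banana".count("a")  # 3, via str.count

# Similarly for dicts, dict.get and collections.Counter replace a lot of
# "if key in d: ... else: ..." boilerplate:
from collections import Counter
Counter("banana")  # Counter({'a': 3, 'n': 2, 'b': 1})
```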
Also, I've noticed that some problem sets lend themselves heavily to certain built-in modules: the `re` module is really helpful for completing the word-count exercise, and the `datetime` module is helpful for the gigasecond exercise, but this is the kind of information a beginner might not know to look for. Another simple fix might be to point to the relevant docs there, either in the introductions attached to each exercise or after someone has completed the problem :)
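To make that concrete, here's a minimal sketch of how those modules help; these are not the official example solutions, just illustrations of the relevant standard-library calls:

```python
import re
from collections import Counter
from datetime import datetime, timedelta

def word_count(sentence):
    # re.findall extracts word-like tokens, so you don't have to hand-roll
    # the splitting and punctuation stripping.
    return Counter(re.findall(r"[a-z0-9]+", sentence.lower()))

def add_gigasecond(moment):
    # timedelta handles the calendar arithmetic for you.
    return moment + timedelta(seconds=10**9)

word_count("one fish two fish red fish blue fish")
add_gigasecond(datetime(2011, 4, 25))
```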
By the way, if you do end up recommending the `re` module to people (which I've found helpful several times so far in completing exercises), a crash course in regex syntax might be helpful further reading to include!
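As a taste of the syntax such a crash course would cover (purely illustrative, not tied to any exercise):

```python
import re

re.findall(r"\d+", "3 cats, 12 dogs")        # ['3', '12'] -- \d is a digit, + means "one or more"
re.sub(r"\s+", " ", "too   many    spaces")  # 'too many spaces' -- \s matches whitespace
bool(re.match(r"^[a-z]+$", "hello"))         # True -- anchors plus a character class
```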
One thing I think would be useful in general is leveraging the track example solutions to showcase "best practices" or standard library features/modules that a beginner is unlikely to know to look for. In my experience, maintainers write pretty high-quality and idiomatic solutions.
On a slightly separate note, I'm not sure what proportion of your user base consists of seasoned programmers vs. complete beginners, but a "How to debug your code and get help when you get stuck" document, linked from every README and including basic advice like "when your code isn't working the way you think it should, print stuff out", could be a small but useful change! Someone might even have already written something like that :)
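For example, the kind of tip I have in mind, as a tiny sketch (the exercise and names here are invented):

```python
def square_of_sum_minus_sum_of_squares(n):
    square_of_sum = sum(range(1, n + 1)) ** 2
    sum_of_squares = sum(i ** 2 for i in range(1, n + 1))
    # Temporary debugging output: when the result isn't what you expect,
    # print the intermediate values, then remove the print once it is.
    print("square_of_sum =", square_of_sum, "sum_of_squares =", sum_of_squares)
    return square_of_sum - sum_of_squares

square_of_sum_minus_sum_of_squares(10)  # prints 3025 and 385, returns 2640
```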
> a crash course in regex syntax might be helpful further reading to include!
I love the https://regexcrossword.com/ site for practicing/learning regex
"How to debug your code and get help when you get stuck"
Yes!
@stkent by "track example solutions", do you just mean the solutions other people have written to the same problem set? Actually, one easy fix might be to pin solutions that have been liked several times to the top of the general solutions tab, instead of just sorting by most recent (which seems to be the default), so that they can serve as model solutions for others to look at.
(and perhaps automatically pin solutions a user has individually liked to the top of the solutions page as well for easy safe-keeping!)
@angelinahli not quite; when a language track implements an exercise, it is common (as part of that implementation) for the writer to also include an example implementation that validates the test suite is 'correct'. That example is not visible to track participants, but I am not sure why not. I love your idea of a mechanism that helps to surface the most interesting/thought-provoking submitted solutions too! We might need to tweak the description around the existing 👍 or 👎 mechanism to make it clear that an upvote means something like "this solution taught me something/presented an alternate framing of the problem" rather than "I like this and it appears to work".
There are a couple of discussions that you might enjoy:
One thing to bear in mind is that we're completely redesigning the site from the ground up. Instead of asking ourselves "should we sort by the most upvoted solution?" etc. (the sort of question we've typically asked ourselves before), we're asking "what are people trying to achieve?"
So far this has been pretty enlightening (I have no experience with product design or user experience). What often happens is that we start with a vague idea of the topic (e.g. "what does progression mean in an Exercism language track?"), end up with 30 or 40 questions about it, and then, through an hour (or 5, or 10) of discussing each question, the whole topic boils down to two or three important, fundamental ideas/concepts/trade-offs that drive the design choices. (The result of the "progression" discussion can be read here: https://github.com/exercism/docs/blob/master/about/conception/progression.md)
As I see it, the core of this discussion is about how to best help people learn the style, conventions, idioms, and standard library of a language.
The ways that people currently learn are:
This raises all sorts of thoughts, but some of the core ones are:
We discuss our thinking around feedback here: https://github.com/exercism/docs/blob/master/about/conception/code-review.md
In terms of automation, we've determined that the automation should help reviewers (it's fine to be impersonal there), and we should make it easy to reuse snippets and link to common resources (workflow optimization).
In terms of learning from other people's code, there are trade-offs as well. We don't want to just link to the most upvoted things, because that can start voting wars, and it also makes visible things more visible and invisible things impossible to find.
It goes on and on, and it's fascinating, and it also means that we're often not discussing things like mechanisms for voting and what the wording or icons would be until the last 1% of the design process.
Thanks, everyone, for the thoughtful comments here. We're going to be doing a lot more around mentoring resources and tools, and if you have feedback about specific exercises, I'd suggest that you open an issue in the mentors repo: https://github.com/exercism/mentors
We'll be working on better tools to surface interesting comments, thoughts, solutions, etc -- but that will be more directed discussions. For now I'm going to close this.
I'm working through the Python language track right now, and have noticed that, especially for the first few exercises, people tend to make the same coding mistakes in the beginning (e.g. submitting `if bool return True else return False` instead of `return bool`). While the tests are great for helping people spot errors, they're less helpful for helping people write better code quickly. One fast way to fix this might be to direct users, at least for the first few problems they complete, to a list of common mistakes people have made on that problem set, so that they can improve their code in the future.
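For concreteness, here's that pattern spelled out (a hypothetical leap-year check, not anyone's actual submission):

```python
# The common first attempt: wrapping an already-boolean condition in if/else.
def is_leap_year_verbose(year):
    if year % 4 == 0 and (year % 100 != 0 or year % 400 == 0):
        return True
    else:
        return False

# The condition is already a boolean, so it can simply be returned.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```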