exercism / elixir-analyzer


`Guessing Game`'s automated feedback is contradictory #419

Closed kronn closed 5 months ago

kronn commented 5 months ago

Hi there

I'm currently having fun working through Elixir again. Today's "Guessing Game" left me a bit unsatisfied.

I can complete the exercise either by following the exercise's advice (using a default argument) or by resolving the compiler warnings (which leads me to not use a default argument).

My favourite solution is this:

defmodule GuessingGame do
  def compare(secret_number, guess \\ :no_guess) when guess == :no_guess, do: "Make a guess"

  def compare(secret_number, guess) when secret_number == guess, do: "Correct"

  def compare(secret_number, guess) when abs(secret_number - guess) == 1, do: "So close"
  def compare(secret_number, guess) when secret_number < guess, do: "Too high"
  def compare(secret_number, guess) when secret_number > guess, do: "Too low"
end

Which makes the compiler warn me:

warning: variable "secret_number" is unused (if the variable is not meant to be used, prefix it with an underscore)
  lib/guessing_game.ex:2:15

warning: def compare/2 has multiple clauses and also declares default values. In such cases, the default values should be defined in a header. Instead of:

    def foo(:first_clause, b \\ :default) do ... end
    def foo(:second_clause, b) do ... end

one should write:

    def foo(a, b \\ :default)
    def foo(:first_clause, b) do ... end
    def foo(:second_clause, b) do ... end

  lib/guessing_game.ex:4:7

Granted, I could have prefixed the unused arguments with an underscore. I didn't, because the visual rhythm feels a bit better without it.

Especially with guards, the compiler is not helping with the goals of the exercise (guards, default arguments, multiple clauses).

I do not have a good solution for this problem, sadly.

jiegillet commented 5 months ago

Hi @kronn,

I'm not sure I'm following; please check my understanding. The compiler is advising you to write

    def foo(a, b \\ :default)
    def foo(:first_clause, b) do ... end
    def foo(:second_clause, b) do ... end

which in your case would translate to

defmodule GuessingGame do
  def compare(secret_number, guess \\ :no_guess)

  def compare(secret_number, guess) when guess == :no_guess, do: "Make a guess"
  def compare(secret_number, guess) when secret_number == guess, do: "Correct"
  def compare(secret_number, guess) when abs(secret_number - guess) == 1, do: "So close"
  def compare(secret_number, guess) when secret_number < guess, do: "Too high"
  def compare(secret_number, guess) when secret_number > guess, do: "Too low"
end

This is idiomatic Elixir. The first clause still uses a default argument; doesn't that fulfill the requirement of the exercise? It's not required to write guards for every single clause. Is that the source of your question?

(if we were in a mentoring session, I would probably show you my preferred way to write the next two clauses

  def compare(_secret_number, :no_guess), do: "Make a guess"
  def compare(guess, guess), do: "Correct"

but I know that's not your point here.)
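
For completeness, here is a sketch of how the whole module could look if those pattern-matched clauses replaced the first two guarded ones in the header version above. This is just an illustration of the idea, not a prescribed solution:

defmodule GuessingGame do
  # the bodiless header declares the default value once
  def compare(secret_number, guess \\ :no_guess)

  # pattern matching replaces the first two guards
  def compare(_secret_number, :no_guess), do: "Make a guess"
  def compare(guess, guess), do: "Correct"

  def compare(secret_number, guess) when abs(secret_number - guess) == 1, do: "So close"
  def compare(secret_number, guess) when secret_number < guess, do: "Too high"
  def compare(secret_number, guess) when secret_number > guess, do: "Too low"
end

The repeated variable in compare(guess, guess) makes that clause match only when both arguments are equal, which is what the guard secret_number == guess expressed before.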

kronn commented 5 months ago

Oh. Hmm. Well... So...

It seems that I was clueless and too quick to open an issue... The idiomatic Elixir you provided indeed compiles without further warnings. And the out-of-band mentoring solution is really nice. I did not know you could do that. That kind of solution makes me want to read the Elixir source :-)

In my defense, I found the automated feedback confusing, sadly because I did things wrong: I misunderstood the "signature only" method (the bodiless function head). Would it make sense to point to human mentoring instead of suggesting to open an issue? Or, depending on the hour of the day, suggest sleeping on it and coming back fresh tomorrow? I honestly thought the automated feedback looked wrong...

I am open to closing this issue if you agree that the error was mostly on my part.

angelikatyborska commented 5 months ago

> Would it make sense to point to human mentoring instead of suggesting to open an issue?

That's a very good idea actually 🤔

jiegillet commented 5 months ago

It's all good: you are learning Elixir, the feedback tells you to open an issue if you think it's wrong, and you did :)

I also think pointing to mentoring is a good idea, as long as we keep a link to this repo in case of an actual bug. Let's repurpose this issue towards that goal.

kronn commented 5 months ago

So you need a PR for https://github.com/exercism/website-copy/blob/main/analyzer-comments/elixir/general/feedback_request.md, right?

How about

    If this automated feedback is confusing, you can request human mentoring, if it just doesn't look right, please open an issue in the exercism/elixir-analyzer repository.

I tried to match the link to the one used when there is no automated feedback. It might be obvious that I do not know much about Exercism's internals or any existing templating being applied here.

angelikatyborska commented 5 months ago

> So you need a PR for exercism/website-copy@main/analyzer-comments/elixir/general/feedback_request.md, right?

Yes

> I tried to match the link to the one used when there is no automated feedback.

Hmm... If the link is relative, then we would need to know to which page it is relative. I found at least two different pages where the feedback can appear, so it's not possible to create a valid relative link:

https://exercism.org/tracks/elixir/exercises/bird-count/iterations
https://exercism.org/tracks/elixir/exercises/bird-count/edit

We could create a valid absolute link though. The analyzer can pass parameters to the comments.

Here's the code that appends the general_feedback comment: https://github.com/exercism/elixir-analyzer/blob/9cd34dc4d3fae061611373b6ddef67eece9b1164/lib/elixir_analyzer/submission.ex#L103-L108. The submission struct there should contain submission.source.slug, which is the exercise slug; that should be enough to produce a valid URL for requesting mentoring.
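
For illustration, here is a rough sketch of how the slug could become a comment parameter. The module and function names, the :mentoring_url key, and the exact URL path are assumptions made up for this sketch; only submission.source.slug comes from the code linked above.

defmodule MentoringParamsSketch do
  # Hypothetical sketch, not the actual analyzer code: build an absolute URL
  # for requesting mentoring from the exercise slug carried in the submission.
  # The "/mentor_discussions/new" path segment is an assumption and would need
  # to be checked against the website's actual routes.
  def mentoring_params(%{source: %{slug: slug}}) do
    %{mentoring_url: "https://exercism.org/tracks/elixir/exercises/#{slug}/mentor_discussions/new"}
  end
end

That map could then travel with the general_feedback comment as its params, assuming the website interpolates comment parameters into the copy.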

But we can also just skip having a link altogether.

> If this automated feedback is confusing, you can request human mentoring, if it just doesn't look right,

Let's use the same wording as on the website. The main call-to-action button on the website is now called "submit for review", so let's say something like:

    If this automated feedback is confusing, you can submit your code for review by a human. If you're sure this automated feedback is wrong, ...
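
Putting the pieces together, the revised feedback_request.md might read roughly like the sketch below. The %{mentoring_url} placeholder and the link target are assumptions tied to the parameter idea above; as noted earlier, we could also skip the links altogether.

    If this automated feedback is confusing, you can [submit your code for review by a human](%{mentoring_url}).
    If you're sure this automated feedback is wrong, please open an issue in the
    [exercism/elixir-analyzer](https://github.com/exercism/elixir-analyzer) repository.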