hackworthltd / primer-app

Primer's React frontend application.
GNU Affero General Public License v3.0

Testing Mode (Apply function to values by substituting variables to values on canvas) #380

Open annedino4 opened 2 years ago

annedino4 commented 2 years ago

Julia suggested we design more feedback on the canvas and provide students with the option to "use" the function as soon as they finish writing it. We agree that this is a way to improve engagement, and we have the advantage of being able to visualise it easily with the trees.

It can also help students understand that variables are placeholders for the values with the right type.

I have done some design work demonstrating the idea (full design). There is also an animation of the visualised process. (Click on "Visualise" to start.)

Screenshot 2021-10-26 at 10 54 33 am

However, it's a different approach compared to eval. The "function machine" does not change during the process; it's merely a machine that takes inputs and produces outputs. On the other hand, eval is a process that rewrites the whole definition into a value.

georgefst commented 2 years ago

It's unclear exactly what's being proposed here, in the general case. To what extent does this overlap with Eval mode? Are we only suggesting that we show as far as substituting a value for a lambda-bound variable x? The animation seems to go one step further and highlight the matching branch in any match expressions scrutinising x.

georgefst commented 2 years ago

To be clear, I like the idea, but we need to think about exactly what we're proposing for more complex functions.

dhess commented 2 years ago

It's unclear exactly what's being proposed here, in the general case. To what extent does this overlap with Eval mode?

Think of this as potentially being Primer's version of unit testing, or a replacement for a traditional REPL: the student wants a quick way to see what happens when they give the function particular input values. In Vonnegut, in order to do that, the student has to write an expression that applies the function to those test input values. IMO, that's not ideal, for at least the following reasons:

  1. It's pretty cumbersome for a quick test. You have to a) create a new definition; b) define the type that the expression will evaluate to; and c) build the expression.
  2. That test expression now lives on your canvas, takes up space, adds cognitive load, etc.; but presumably you just wanted to test a few values to see how the function works, just as you would if you had a traditional REPL. In other words, in most cases, this is just a throwaway expression that you don't want to keep around.
  3. It assumes students have a good understanding of what function application is, how to use it in the UI, how to figure out the value that the function application will have, etc. Obviously, we eventually want them to understand all of those things, but that's a lot of extra formal concepts that a student in their first lesson or two will simply not have enough time (or working memory) to understand. We've already observed that students struggle with the "what is the value of this application?" bit, at least. I suspect we can make this design intuitive enough that students can understand "what happens when I give the function this input value?" without much formality needed.
  4. It's a different way to visualize simple evaluation and function application, which could be helpful, pedagogically.

On that last point: eval mode shows you the process that is generated when you evaluate any arbitrary expression. I don't envision this visualization showing any reduction steps, only atomic substitution and pattern matching, and only in the body of the function they're testing. Using the example of not shown above, I'm imagining that if you provided an input value of and (or (or (and true true) false) true) false, this feature would not show the steps involved in evaluating that expression: it would simply evaluate it (false) and then show which pattern it matches in not. We might even want to limit this mode's input values to literal values of the input type(s), in order to keep things really simple.
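To make that concrete, here is a rough sketch in TypeScript (the repo's frontend language). Everything here — the `Expr` type, `evalExpr`, `matchNot` — is a hypothetical illustration of the proposed semantics, not primer-app code: the input expression is reduced to a value in one opaque step, and only the resulting pattern match in not's body is surfaced for visualisation.

```typescript
// Hypothetical model of boolean input expressions, for illustration only.
type Expr =
  | { tag: "bool"; value: boolean }
  | { tag: "and"; left: Expr; right: Expr }
  | { tag: "or"; left: Expr; right: Expr };

// Evaluate the input expression atomically: no intermediate reduction
// steps are shown to the student, only the final value.
function evalExpr(e: Expr): boolean {
  switch (e.tag) {
    case "bool": return e.value;
    case "and": return evalExpr(e.left) && evalExpr(e.right);
    case "or": return evalExpr(e.left) || evalExpr(e.right);
  }
}

// `not` as a pattern match; the proposed mode would highlight which
// branch of the match fires for the evaluated input.
function matchNot(scrutinee: boolean): { branch: "True" | "False"; result: boolean } {
  return scrutinee
    ? { branch: "True", result: false }
    : { branch: "False", result: true };
}

// and (or (or (and true true) false) true) false
const input: Expr = {
  tag: "and",
  left: {
    tag: "or",
    left: {
      tag: "or",
      left: { tag: "and", left: { tag: "bool", value: true }, right: { tag: "bool", value: true } },
      right: { tag: "bool", value: false },
    },
    right: { tag: "bool", value: true },
  },
  right: { tag: "bool", value: false },
};

const v = evalExpr(input);   // evaluated in one step: false
const outcome = matchNot(v); // visualisation highlights the "False" branch
```

The point of the sketch is the split between `evalExpr` (one atomic step, nothing visualised) and `matchNot` (the only part the student sees animated).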

Are we only suggesting that we show as far as substituting a value for a lambda-bound variable x? The animation seems to go one step further and highlight the matching branch in any match expressions scrutinising x.

Don't take the proposed design too literally — this is only a first pass, and there's a lot yet to think about.
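For the narrower reading of the proposal (substitution only), a minimal sketch, again in TypeScript with hypothetical names: the single step shown to the student is replacing the lambda-bound variable with the supplied value in the function body.

```typescript
// Hypothetical term representation, for illustration only.
type Term =
  | { tag: "var"; name: string }
  | { tag: "bool"; value: boolean }
  | { tag: "app"; fn: Term; arg: Term }
  | { tag: "lam"; param: string; body: Term };

// Substitute `arg` for free occurrences of `param` in `body`.
function subst(body: Term, param: string, arg: Term): Term {
  switch (body.tag) {
    case "var":
      return body.name === param ? arg : body;
    case "bool":
      return body;
    case "app":
      return { tag: "app", fn: subst(body.fn, param, arg), arg: subst(body.arg, param, arg) };
    case "lam":
      // Stop at a shadowing binder: the inner `param` is a different variable.
      return body.param === param
        ? body
        : { tag: "lam", param: body.param, body: subst(body.body, param, arg) };
  }
}

// Applying (λx. x) to true: the one visualised step substitutes the
// value for the lambda-bound variable in the body.
const id: Term = { tag: "lam", param: "x", body: { tag: "var", name: "x" } };
const stepped = subst(id.body, id.param, { tag: "bool", value: true });
```

Whether the animation then goes on to highlight the matching branch of any match scrutinising x (as georgefst notes it appears to) is the open design question.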

dhess commented 1 year ago

We discussed this in more detail in our 2022-10-24 developer meet-up. I think the consensus was that this seems like a reasonable idea, if for no other reason than it may be simpler to understand (and probably simpler to follow) than a full-blown eval mode: in this mode, the original tree representing the function being tested remains in place, rather than being devoured as trees are in eval mode.

The main concern is whether the time spent implementing this would be better spent working on eval mode. My personal view is that this implementation seems much more feasible for a 1.0 than full eval mode, and therefore it may be better just to go for this one and have a reasonable chance of shipping it, versus going for full eval mode with a higher risk that we won't ship any interactive evaluation mode at all.

(Ideally, we'd be able to reuse some of the work we do for this mode in full eval mode, but they're pretty different, so I don't think we should count on that.)

brprice commented 1 year ago

Some benefits this "in-place" eval visualisation may have include:

Some drawbacks: