annedino4 opened this issue 2 years ago
It's unclear exactly what's being proposed here, in the general case. To what extent does this overlap with Eval mode? Are we only suggesting that we show as far as substituting a value for a lambda-bound variable `x`? The animation seems to go one step further and highlight the matching branch in any `match` expressions scrutinising `x`.
To be clear, I like the idea, but we need to think about exactly what we're proposing for more complex functions.
> It's unclear exactly what's being proposed here, in the general case. To what extent does this overlap with Eval mode?
Think of this as potentially being Primer's version of unit testing, or a replacement for a traditional REPL: the student wants a quick way to see what happens when they give the function particular input value(s). In Vonnegut, in order to do that, the student has to write an expression that applies the function to those test input values. IMO, that's not ideal for at least the following reasons:
On that last point: eval mode shows you the process that is generated when you evaluate any arbitrary expression. I don't envision this visualization showing any reduction steps, only atomic substitution and pattern matching, and only in the body of the function they're testing. Using the example of `not` shown above, I'm imagining that if you provided an input value of `and (or (or (and true true) false) true) false`, this feature would not show the steps involved in evaluating that expression: it would simply evaluate it (to `false`) and then show which pattern it matches in `not`. We might even want to limit this mode's input values to literal values of the input type(s), in order to keep things really simple.
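To make the intended semantics concrete, here's a minimal sketch (not Primer's actual implementation; `visualise_call` and the dict-based branch encoding are hypothetical) of the proposed behaviour: the argument is evaluated in one opaque step, with no intermediate reductions shown, and the visualisation then just highlights the branch of the `match` that the resulting value selects.

```python
# Hypothetical sketch of the proposed mode, using `not` as the function under test.
# The evaluator is treated as a black box: its reduction steps are never shown.

def visualise_call(branches, arg_expr, evaluate):
    """branches: pattern literal -> branch result, modelling a match expression.
    evaluate: opaque evaluator for the argument expression."""
    value = evaluate(arg_expr)      # one opaque step: whole expression -> value
    return value, branches[value]   # highlight only the branch `value` matches

# `not` as a two-branch match on its Bool scrutinee
not_branches = {True: False, False: True}

# The complex boolean argument evaluates (opaquely) to False...
value, result = visualise_call(
    not_branches,
    "and (or (or (and true true) false) true) false",
    lambda _: False,
)
# ...so the visualisation highlights the `false` branch of `not`, giving True.
```

The point of the sketch is only the shape of the interaction: one substitution, one opaque evaluation, one highlighted branch.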
> Are we only suggesting that we show as far as substituting a value for a lambda-bound variable `x`? The animation seems to go one step further and highlight the matching branch in any `match` expressions scrutinising `x`.
Don't take the proposed design too literally — this is only a first pass, and there's a lot yet to think about.
We discussed this in more detail in our 2022-10-24 developer meet-up. I think the consensus was that this seems like a reasonable idea, if for no other reason than it may be simpler to understand (and probably simpler to follow) than a full-blown eval mode: in this mode, the original tree representing the function being tested remains in place, rather than being devoured as trees are in eval mode.
The main concern is whether the time spent implementing this would be better spent working on eval mode. My personal view is that this implementation seems much more feasible for a 1.0 than full eval mode, and therefore it may be better just to go for this one and have a reasonable chance of shipping it, versus going for full eval mode with a higher risk that we won't ship any interactive evaluation mode at all.
(Ideally, we'd be able to reuse some of the work we do for this mode in full eval mode, but they're pretty different, so I don't think we should count on that.)
Some benefits this "in-place" eval visualisation may have include:
Some drawbacks:

- It cannot see inside called functions: e.g. for `f x = even (x+1)` with `x=3`, it could not see inside the definition of `even`, and could only show that the body evaluates to `even 4` and then to `true`. (Though perhaps some clever design could ameliorate this: we could have a "look inside this application" action which switches to a similar visualisation for `even 4`, i.e. moves to the definition of `even` and visualises the input `4`.)
- Persistence is unclear: in eval mode, one creates a definition like `trace1 = f 3` (which persists) to visualise the evaluation. In this mode one works by "annotating" a definition, and it is not clear how/if this annotation should persist (across reloading a session; across modification to the definition; across visualising some other inputs and then wanting to come back).
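The first drawback can be sketched concretely (a hypothetical model, not Primer code; `visualise_f` and the string-based step trace are assumptions for illustration): the visualisation only rewrites the top-level body, so `even` remains an opaque application right up until its final value appears.

```python
# Hypothetical sketch of the "cannot see inside called functions" drawback,
# for f x = even (x+1) visualised at x=3.

def even(n):
    return n % 2 == 0

def visualise_f(x):
    steps = []
    steps.append(f"even ({x}+1)")   # step 1: substitute the input into the body
    steps.append(f"even {x + 1}")   # step 2: argument reduces; `even` stays opaque
    steps.append(even(x + 1))       # step 3: final value, with no view of even's body
    return steps

print(visualise_f(3))  # ['even (3+1)', 'even 4', True]
```

A "look inside this application" action, as suggested above, would amount to restarting the same visualisation on `even` with input `4`.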
Julia suggested we design more feedback on the canvas and give students the option to "use" the function as soon as they finish writing it. We agree that's a way to improve engagement, and we have the advantage of being able to visualise it easily with the trees.
It can also help students understand that variables are placeholders for values of the right type.
I have done some design demonstrating the idea (Full design), as well as an animation of the visualised process. (Click on "Visualise" to start.)
However, it's a different approach from eval. The "function machine" does not change during the process: it's merely a machine that takes inputs and produces outputs. Eval, on the other hand, is a process that rewrites the whole definition into a value.