statelyai / studio-issues

Report issues found in Stately Studio

stately.ai editor does not respect guards #186

Open inverted-capital opened 1 year ago

inverted-capital commented 1 year ago

Description

In this example machine, a guard that always evaluates to false is taken in the simulator.

How can I get a machine that honors guard conditions?

https://stately.ai/registry/editor/4284c1c1-0738-46da-968c-4bdd10076f81?machineId=5cd8903a-c81f-4013-bf77-76788b2fe159&mode=Simulate

Expected result

I should not be able to click through events whose guard condition evaluates to a falsy value.

Actual result

I can click anywhere, as though the rules of logic do not apply to me.

Reproduction

https://stately.ai/registry/editor/embed/4284c1c1-0738-46da-968c-4bdd10076f81?machineId=5cd8903a-c81f-4013-bf77-76788b2fe159&mode=Simulate

Additional context

No response

davidkpiano commented 1 year ago

This is currently working as expected, although it may be confusing in the code export (cc. @kevinmaes).

Clicking a guarded transition will first set the guard to true, so you can simulate taking that specific branch.
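For comparison with actual XState execution, where a transition whose guard returns false is simply not taken: here is a minimal plain-JavaScript sketch of those guard semantics (an illustration only, not the XState implementation; the machine shape and names are made up).

```javascript
// Sketch of guard semantics: a transition is only taken when its guard
// (if any) returns a truthy value; otherwise the machine stays put.
const machine = {
  initial: 'idle',
  states: {
    idle: {
      on: {
        START: { target: 'running', guard: (ctx) => ctx.allowed },
      },
    },
    running: { on: {} },
  },
};

function transition(machine, state, event, context) {
  const t = machine.states[state].on[event.type];
  if (!t) return state;            // no transition defined for this event
  if (t.guard && !t.guard(context)) {
    return state;                  // guard is falsy: transition blocked
  }
  return t.target;
}

console.log(transition(machine, 'idle', { type: 'START' }, { allowed: false }));
console.log(transition(machine, 'idle', { type: 'START' }, { allowed: true }));
```

The simulator's behavior described above amounts to forcing `t.guard` to return true for the clicked branch.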

inverted-capital commented 1 year ago

Thanks @davidkpiano - is there any way to toggle this in the simulator, much as the old visualizer (https://stately.ai/viz/) did?

I found this useful when walking non-technical users through the state diagrams, since the guard rails showed them what was and wasn't allowed.

inverted-capital commented 1 year ago

So, for context on our use case: we use the Studio for modelling and discussions about the layout; then go back to VS Code to code up the logic for the guards and the actions that assign to context; then over to https://stately.ai/viz for final, highly detailed discussions with the guard logic in place; then (!) back to VS Code to generate test paths with @xstate/test to operate on our system under test.

It would be epic if this could somehow all be one thing. We think it speaks to the potential of the statechart paradigm that we would rather suffer all this than the old ways, which are too disparate to even complain about in a single ticket.

Perhaps a simple start would be a VS Code plugin that syncs the code on disk with the stately.ai project. With this plugin, devs could write machines that are broken up into several files and import all kinds of weird libraries (ahem), since the plugin would load the machine locally and then extract an inert JSON representation to send to stately.ai.
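A rough sketch of what "extracting an inert JSON representation" could mean (the helper name and the `$kind` marker are hypothetical, not a real Stately or XState API): serialize the config while replacing functions with placeholders, so the executable logic never leaves disk.

```javascript
// Hypothetical: strip executable code from a machine config, leaving an
// "inert" JSON description that a sync plugin could upload.
function toInertJson(config) {
  return JSON.stringify(
    config,
    (key, value) =>
      typeof value === 'function'
        ? { $kind: 'inline', name: value.name || key } // placeholder for code
        : value,
    2
  );
}

const machineConfig = {
  initial: 'idle',
  states: {
    idle: {
      on: {
        START: { target: 'busy', guard: function canStart() { return false; } },
      },
    },
    busy: {},
  },
};

const inert = toInertJson(machineConfig);
console.log(inert);
```

The round trip is lossy by design: the studio would see the structure and guard names, while the implementations stay in local files.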

davidkpiano commented 1 year ago

It would be epic if this could somehow all be one thing. We think it speaks to the potential of the statechart paradigm that we would rather suffer all this than the old ways, which are too disparate to even complain about in a single ticket.

Completely agree with you, and that's what we're working towards.

We're currently doing some heavy refactors and feature additions that:

inverted-capital commented 1 year ago

Sounds like a great offering 😄

Test generation is of particular interest - where may I read more about the details of your plans in that area?

davidkpiano commented 1 year ago

Sounds like a great offering 😄

Test generation is of particular interest - where may I read more about the details of your plans in that area?

What we're initially thinking is the generation of test steps for chosen paths in a state machine, such as the shortest path from the initial state to any other desired/final state. The generated test steps would act as "scaffolding" so you can fill out the assertions and event executions in each test step.
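That scaffolding idea could be sketched roughly like this, assuming a plain adjacency-map view of the machine (the graph shape and state names here are invented for illustration):

```javascript
// Toy transition graph: state -> { EVENT: nextState }
const graph = {
  idle: { START: 'loading' },
  loading: { RESOLVE: 'done', REJECT: 'failed' },
  failed: { RETRY: 'loading' },
  done: {},
};

// BFS for the shortest event path from one state to another.
function shortestPath(graph, from, to) {
  const queue = [[from, []]];
  const seen = new Set([from]);
  while (queue.length) {
    const [state, path] = queue.shift();
    if (state === to) return path;
    for (const [event, next] of Object.entries(graph[state])) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push([next, [...path, { event, state: next }]]);
      }
    }
  }
  return null;
}

// Each step becomes a scaffolded test slot: send the event, then leave a
// blank assertion for the user to fill out.
for (const step of shortestPath(graph, 'idle', 'done')) {
  console.log(`send ${step.event} -> assert state "${step.state}"`);
}
```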

Or perhaps just generating @xstate/test code.

What would you most like to see?

inverted-capital commented 1 year ago

I would most like to see feedback in the UI when I introduce an infinite loop, or when a change balloons the total number of paths, such as accidentally adding another 5,000 paths that I didn't mean to. My biggest frustration with the current way of using @xstate/test is that things are going nicely, then I make one change, and Node.js runs out of RAM.

How I feel carefully doing model-based testing: [image]

How I would like this to look in the UI: an overlay on each state showing how many paths come into it through each incoming transition, and how many paths go out of it, so I can quickly get a feel for the impact of my changes. I would also like a state to turn red once it crosses a threshold number of paths, so I could see which change introduced an unfeasible number of generated paths.
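For an acyclic slice of the transition graph, the incoming-path counter being asked for can be computed in one pass over the states in topological order. A small sketch (the graph and the threshold idea are illustrative assumptions, not an existing Studio feature):

```javascript
// Toy acyclic graph: state -> list of successor states. Keys are assumed
// to already be in topological order for this sketch.
const graph = {
  a: ['b', 'c'],
  b: ['d'],
  c: ['d'],
  d: [],
};

// Count how many distinct paths arrive at each state from the initial one.
function pathCounts(graph, initial) {
  const counts = { [initial]: 1 };
  for (const state of Object.keys(graph)) {
    for (const next of graph[state]) {
      counts[next] = (counts[next] || 0) + (counts[state] || 0);
    }
  }
  return counts;
}

const counts = pathCounts(graph, 'a');
// A UI overlay could render counts[state] on each node, turning the node
// red when the count crosses a configured threshold.
console.log(counts);
```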

This single feature would vastly improve the speed at which we could build models for testing, and avoid that sudden "wtf, path explosion" feeling.

With this feature in place, it becomes feasible to manage a model that has a large number of legitimate paths, particularly coupled with advanced features like chart dissection and realtime filter additions.

Now, 🥁, if you can provide tools for handling a huge model with mental comfort, you create a situation where executing the SUT may be too big a job to complete on a single machine. At that point you could start selling parallel compute backed by Lambda or something: I write my SUT, verify it works through a few hand-selected paths, then upload it to your platform to run with massive parallelism every time I change the statechart or the SUT code, starting with a random sprinkling of paths for quick feedback and concluding with complete assurance.

inverted-capital commented 1 year ago

As a simple start, maybe decorate each state with a counter of the adjacency-map counts, and calculate it in passes rather than completing each state before moving on. That way the UI can show that it is still calculating but has at least reached value x, giving rapid feedback and providing a yield spot to keep the UI responsive. Extra for experts would be a multiplier guess showing how much each state amplifies the number of paths, perhaps 🤷
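The "calculate in passes with yield spots" idea might look like a generator that emits partial counts, so a UI loop can render progress between chunks of work (a hypothetical sketch, not Studio code):

```javascript
// Toy graph: state -> { EVENT: nextState }
const graph = {
  idle: { START: 'loading' },
  loading: { RESOLVE: 'done', REJECT: 'failed' },
  done: {},
  failed: {},
};

// Yield after each state so the caller can update the UI mid-calculation.
function* adjacencyCounts(graph) {
  const out = {};
  for (const [state, events] of Object.entries(graph)) {
    out[state] = Object.keys(events).length;
    yield { state, outgoing: out[state], partial: { ...out } };
  }
  return out;
}

for (const update of adjacencyCounts(graph)) {
  // A real UI would await a frame here (e.g. requestAnimationFrame)
  // instead of logging synchronously.
  console.log(update.state, update.outgoing);
}
```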

The ultimate power would be to sit with a client, have them click through some paths of interest at random, and have the UI instantly say "there are 231 ways to arrive at this state, and all of them have been tested", then show a list and count of all the next paths that could be taken.

davidkpiano commented 1 year ago

@inverted-capital These are really great ideas. I'm going to distill them and document some of them in stately.ai/feedback (this is basically a reminder to self).

inverted-capital commented 1 year ago

I made this little tool that gives realtime graphs of which transitions are being taken; it has helped a lot with runaway models. [screenshot]

The code is here

That file is the little framework shortcut we have been using to specify tests with more safety, so they end up looking like this:

    test('simple solve packet', {
      toState: isCount(1, { type: 'PACKET', enacted: true }),
      filter: and(
        skipActors('funder', 'trader', 'editor', 'superQa'),
        skipAccountMgmt(),
        max(1, { type: 'HEADER' }),
        max(1, { type: 'SOLUTION' }),
        max(0, { type: 'DISPUTE' }),
        skipNavigation
      ),
      sut: {},
    })

It does things like ensure there is always at least one path, in case we change the model and silently generate no tests. It also adds a .only(...) extension that skips path generation for all non-focused tests, since that can really bog down testing, with a few dozen model tests taking about a second each to generate 🤷
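The "at least one path" safety check described above boils down to a guard like this (the names are invented; this is not the actual helper in that file):

```javascript
// Fail loudly when a model change silently filters every path away,
// instead of letting a test suite pass with zero generated tests.
function requirePaths(paths, testName) {
  if (paths.length === 0) {
    throw new Error(
      `test "${testName}" generated zero paths; did the model change?`
    );
  }
  return paths;
}

// Stand-in for paths produced by a path generator.
const paths = [['idle', 'loading', 'done']];
console.log(requirePaths(paths, 'simple solve packet').length);
```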

inverted-capital commented 1 year ago

The next little tool we'll make is a memory-efficient path walker so we can do Monte Carlo runs on the model; we have long since admitted we cannot cover every path any more, and the Dijkstra's algorithm currently used in @xstate/graph uses all my RAM and still doesn't finish.
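A memory-efficient Monte Carlo walker can avoid enumerating paths entirely: each run holds only the path it is currently on. A minimal sketch under that assumption (the graph shape is invented for illustration):

```javascript
// Toy graph: state -> { EVENT: nextState }
const graph = {
  idle: { START: 'loading' },
  loading: { RESOLVE: 'done', REJECT: 'failed' },
  failed: { RETRY: 'loading' },
  done: {},
};

// One bounded random walk: memory use is O(maxSteps), independent of how
// many total paths the model has. The rand parameter is injectable so runs
// can be made deterministic for testing.
function randomWalk(graph, start, maxSteps, rand = Math.random) {
  const path = [start];
  let state = start;
  for (let i = 0; i < maxSteps; i++) {
    const events = Object.keys(graph[state]);
    if (events.length === 0) break; // terminal state
    const event = events[Math.floor(rand() * events.length)];
    state = graph[state][event];
    path.push(state);
  }
  return path;
}

console.log(randomWalk(graph, 'idle', 10));
```

Repeating the walk many times gives the "random sprinkling" coverage mentioned earlier, without ever materializing the full path set.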