cscore is a minimal-footprint library providing commonly used helpers & patterns for your C# projects. It can be used in both pure C# and Unity projects.
Recording the state changes of the application is much easier to achieve on the Redux store level: a Recorder middleware stores the JSON of each dispatched action, which makes it possible to replay all actions later.
Any change the user makes via the UI that modifies the application state is captured by this approach as well, and replaying the actions allows testing and validating the effects of those state changes on the UI.
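A minimal sketch of such a Recorder middleware in plain C#. All names here (`ActionRecorder`, `Wrap`, `Replay`) are illustrative, not the actual cscore API, and `System.Text.Json` stands in for whatever JSON library the project uses:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Hypothetical recorder middleware: wraps the store's dispatch function and
// keeps the JSON of every dispatched action so the session can be replayed.
public class ActionRecorder {
    public readonly List<string> RecordedActionsJson = new List<string>();

    // Wrap the original dispatcher; every action passes through here first
    // and is serialized to JSON before being forwarded unchanged.
    public Func<object, object> Wrap(Func<object, object> innerDispatcher) {
        return action => {
            RecordedActionsJson.Add(JsonSerializer.Serialize(action));
            return innerDispatcher(action);
        };
    }

    // Replay all recorded actions against a fresh dispatcher. For simplicity
    // this sketch assumes a single action type T; a real implementation would
    // also store the action's type name next to its JSON.
    public void Replay<T>(Action<T> dispatch) {
        foreach (var json in RecordedActionsJson) {
            dispatch(JsonSerializer.Deserialize<T>(json));
        }
    }
}
```

Because the middleware only sees serialized actions, it stays decoupled from the UI layer: anything that changes state, whether triggered by a button or by code, ends up in the same recording.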
The initial idea:
A recorder component for the Unity UI system that:
Records UI input from buttons (and input fields?), or, even more high level, just the clicks and the keyboard input?
Takes screenshots for visual regression (after each click?). The visual diff system could be used to detect when the UI has finished loading, and only afterwards take the visual regression screenshot. The screenshot logic for visual regression tests already exists in cscore and should be reused.
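One way the "detect when the UI finished loading" step could work, sketched as a Unity coroutine. This is an illustrative assumption, not the existing cscore screenshot logic: keep capturing frames until two consecutive frames are pixel-identical, treat that as "UI settled", and only then take the regression screenshot.

```csharp
using System.Collections;
using System.Linq;
using UnityEngine;

// Illustrative sketch: wait until the UI stops changing visually before
// taking the screenshot that is compared in the visual regression test.
public class WaitForUiToSettle : MonoBehaviour {

    public IEnumerator CaptureWhenUiSettled(string screenshotFileName) {
        byte[] previousFrame = null;
        while (true) {
            // Screenshots must be taken at the end of the frame:
            yield return new WaitForEndOfFrame();
            var tex = ScreenCapture.CaptureScreenshotAsTexture();
            byte[] currentFrame = tex.GetRawTextureData();
            Destroy(tex); // avoid leaking a texture per frame
            if (previousFrame != null && currentFrame.SequenceEqual(previousFrame)) {
                break; // two identical frames in a row -> UI finished loading
            }
            previousFrame = currentFrame;
        }
        // Now take the actual visual regression screenshot:
        ScreenCapture.CaptureScreenshot(screenshotFileName);
    }
}
```

A real implementation would compare with a tolerance (the visual diff system) instead of exact byte equality, since animations, cursors, or video backgrounds would otherwise never settle.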
To be able to use the recording as a visual unit test, the recording should start when the scene is started and the recorder is attached to the scene. A recording file should contain plain JSON so that it can be versioned together with the rest of the code.
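A recording file could then look something like the following. This schema (field names `t` for the timestamp in ms, `type`, `targetId`, `pos`) is a hypothetical sketch, not a defined format:

```json
[
  { "t": 120,  "type": "Click",    "targetId": "StartButton", "pos": { "x": 420, "y": 310 } },
  { "t": 980,  "type": "TypeText", "targetId": "NameInput",   "text": "Carl" },
  { "t": 2450, "type": "Click",    "targetId": "SubmitButton", "pos": { "x": 420, "y": 520 } }
]
```

Storing both the target id and the raw position keeps the file human-diffable in version control and leaves room for the self-healing idea below.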
Is it possible to auto-update the recording by replaying it? E.g. if the recording contains "click on button at (x,y)" and the button moved in the latest UI so that the button center is no longer at (x,y) but at (x+10,y), the recording could update itself automatically to adapt to such small UI changes.
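Such self-healing could work if each click entry stores the target's name in addition to the raw position: during replay the target is resolved by name, and if it only drifted a little, the stored position is rewritten. A minimal sketch under these assumptions (a Screen Space Overlay canvas, where a UI element's `transform.position` is in screen pixels; the names and the 50px threshold are made up):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical self-healing replay step: resolve the recorded click target
// by name and update the stored position if the button only moved slightly.
public static class RecordingAutoUpdate {

    public static bool TryHealClick(ref Vector2 recordedPos, string targetName) {
        var button = GameObject.Find(targetName)?.GetComponent<Button>();
        if (button == null) { return false; } // target gone -> real test failure
        Vector2 currentCenter = button.transform.position;
        if (Vector2.Distance(currentCenter, recordedPos) < 50f) {
            recordedPos = currentCenter; // small drift: auto-update the recording
            return true;
        }
        return false; // moved too far -> flag for manual review instead
    }
}
```

The threshold matters: healing silently over large distances would hide real regressions, so anything above it should fail the replay and require a human to confirm the new layout.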
What are the typical UI actions on a "click, type, drag, .." level that the recorder should be able to record? Does any research exist that compares low-level UI actions like "drag mouse from x1,y1 to x2,y2" with more UI-focused actions like "drag button from x1,y1 to x2,y2"? Low level sounds more reusable; what are the drawbacks?
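To make that trade-off concrete, the same drag could be recorded in either of these two hypothetical shapes:

```csharp
// Low level: raw pointer coordinates. Reusable across any UI framework and
// trivially replayable via the input system, but brittle: any layout change
// invalidates the recording, and there is no target id to self-heal against.
public record DragPointer(float FromX, float FromY, float ToX, float ToY);

// UI focused: references the element itself. Survives layout changes and
// reads well in version control, but the recorder must resolve which UI
// element was under the pointer when the drag started.
public record DragElement(string ElementId, float ToX, float ToY);
```

A hybrid that stores both (element id plus raw coordinates, as in the JSON sketch above) keeps the low-level replayability while enabling the auto-update idea.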