SuffolkLITLab / ALKiln

Integrated automated end-to-end testing with docassemble, puppeteer, and cucumber.
https://assemblyline.suffolklitlab.org/docs/alkiln/intro
MIT License

Discuss: can we do more than just regression tests from manual interview runs? #175

Closed plocket closed 1 month ago

plocket commented 3 years ago

Right now our tests are basically only good at regression testing. Can we do more? If we require knowing all the variables and values for every test, then I think not, so maybe that's not possible. Do developers have other testing goals?

Other possible testing priorities:

Random input

One idea is setting up situations where this testing framework could put in random values. This sounds like a dangerous feature right now and I'm pretty uncertain about it, but I'd like to pursue the discussion.

This code wouldn't be able to handle things like special validation for dates and stuff. That is, if a field or page had more complex needs, developers would need to put in those complex variables themselves. The simple stuff, on the other hand, would be able to go on its own if needed.
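A rough sketch of what "simple fields get random values, complex fields bail out" could look like. Everything here is invented for illustration (ALKiln has no `randomValueFor` helper), and the field `type` names are just stand-ins for however the framework would classify docassemble fields:

```javascript
// Hypothetical helper: pick a random value for simple field types,
// and return null for anything that needs developer-supplied values.
function randomValueFor(field) {
  switch (field.type) {
    case "text":
      // Random short alphanumeric string
      return Math.random().toString(36).slice(2, 10);
    case "yesno":
      return Math.random() < 0.5;
    case "choices":
      // Pick one of the field's existing options at random
      return field.choices[Math.floor(Math.random() * field.choices.length)];
    default:
      // Dates, fields with custom validation, etc. are too risky to guess,
      // so signal that the test needs an explicit value from the developer.
      return null;
  }
}
```

So `randomValueFor({ type: "date" })` would come back `null`, and the test would fail with a message asking for an explicit value rather than guessing.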

It would definitely need to be opt-in - a developer would need to understand how to handle this behavior and what they could expect from it. Maybe it would be an env var they'd have to set each time they ran the tests manually.
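The opt-in check itself could be tiny. A sketch, where `ALKILN_RANDOM_INPUT` is a made-up variable name, not anything the framework currently reads:

```javascript
// Hypothetical: randomization only kicks in when the developer has
// explicitly set ALKILN_RANDOM_INPUT=true for this run. Anything else
// (unset, "1", "yes") keeps the current strict behavior.
function randomInputEnabled(env = process.env) {
  return env.ALKILN_RANDOM_INPUT === "true";
}
```

Requiring the exact string `"true"` (rather than any truthy value) keeps it hard to enable by accident.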

Why: When a developer is in the midst of developing or updating an interview, things would be changing a lot and tests would either constantly be failing or need constant updating. That makes sense in a lot of contexts. Maybe that's the way to go here too. We might also consider that this might not be what the developer needs to know at that point in development, though. They might just need to know that the interview can run through at least one path without error. They may be adding clarifying pages and questions to the interview as opposed to ones that substantially change the output.

Maybe they'd have one happy-path test that was most likely to run without error, and specifically use random input to keep it passing as development goes on.

I think this would also make another report necessary - what variables were set to what values - so that the developer could see which input caused the success or failure.
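That report could be as simple as recording each variable name and the value the test chose for it, then dumping the list on success or failure. A sketch (the `InputReport` name and shape are invented here):

```javascript
// Hypothetical recorder: collects which variable was set to which value
// so the report can show exactly what input caused a pass or a failure.
class InputReport {
  constructor() {
    this.entries = [];
  }
  record(variable, value) {
    this.entries.push({ variable, value });
  }
  toText() {
    // One "name = value" line per variable, in the order they were set
    return this.entries
      .map(({ variable, value }) => `${variable} = ${JSON.stringify(value)}`)
      .join("\n");
  }
}
```

For example, after `report.record("user.name.first", "Uli")`, `report.toText()` would include the line `user.name.first = "Uli"`.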

Other thoughts

It sounds a little nuts. Maybe the real problem is that it's hard to build these tests manually, which is a shame. I don't think the test generator is quite enough because it doesn't really let the developer create variations on tests. You always have to have gone through the interview down that path at least once, which means you've basically already tested it. Maybe implementing Backgrounds that help developers create variations on tests would help. Not sure. Also not sure how that would show up in reports and errors.

plocket commented 3 years ago

See #22 as well

plocket commented 3 years ago

List of ideas so far:

Will edit to add to these as we go. I know, these should really go in their own issues, but I think it's a broader topic.