Closed: JeroenSoeters closed this issue 1 year ago
I'm not convinced we need to include the expected output in the script. What I would expect is a framework that allows us to:
Given how broadly used TypeScript is, I'm a bit surprised there isn't already a framework for this.
I should probably have been clearer: what I am proposing is exactly that :) It's just a matter of where we want the expectations to live. Here I'm inlining the expectations with the example scripts, which has the advantage that you only have to edit a single file, which serves as both your example and your test. We could automatically run the framework against every file in the examples folder. Also, if you're just browsing the repo, you don't have to run the examples to see what output they produce.
OTOH it might be non-obvious that this is happening, and one could argue that the inlined output distracts from the actual example.
I don't really have a strong preference for one or the other.
95% sure this is the crate I've used in the past; it may be helpful: https://crates.io/crates/assert-json-diff
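For reference, assert-json-diff supports an inclusive comparison where the expected JSON only needs to be contained in the actual JSON. Here is a minimal sketch of that containment idea, written in TypeScript to match the example scripts; the helper name and exact semantics are my assumption, not the crate's API:

```typescript
// Hypothetical helper mirroring assert-json-diff's "inclusive" mode:
// every key/value in `expected` must appear in `actual`; extra keys in
// `actual` are ignored.
function jsonContains(actual: unknown, expected: unknown): boolean {
  if (typeof expected !== "object" || expected === null) {
    // Primitives (and null) must match exactly.
    return actual === expected;
  }
  if (Array.isArray(expected)) {
    if (!Array.isArray(actual)) return false;
    const actualArr = actual;
    return expected.every((item, i) => jsonContains(actualArr[i], item));
  }
  if (typeof actual !== "object" || actual === null || Array.isArray(actual)) {
    return false;
  }
  const actualObj = actual as Record<string, unknown>;
  return Object.entries(expected).every(([key, value]) =>
    jsonContains(actualObj[key], value)
  );
}

// Extra fields in the actual output do not fail the check:
console.log(jsonContains({ name: "my-cell", status: 0 }, { name: "my-cell" })); // true
console.log(jsonContains({ name: "my-cell" }, { name: "other" })); // false
```

An inclusive check like this would let examples assert only the fields they care about, so adding new fields to a response doesn't break every expectation.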
I think it would be good to define what we actually want in terms of test coverage:
For (1), it would be better to have a regular test suite, one test per endpoint. For (2), we could consider the above-mentioned approach with comments in the scripts.
I'm mostly interested in (1) to catch regressions. (2) is fine, but if we have (1) it should be low priority.
I'm also worried that if we have (2), someone will suggest that all our tests should be written in TypeScript, and that will be a whole Thing.
In that case, should we write a set of end-to-end tests against the `aurae-client`? I feel that would cover most of the concerns. The "deno ops" (or whatever these things are called) exercise the `aurae-client` anyway, and anything up to that point is auto-generated, so it should be pretty stable. It would also be a more natural place for the "sort-of-end-to-end tests" I have been writing and sticking in the `cells` and `observe` gRPC service code up until now.
And it would help us drive this sort of stuff from a test: https://github.com/aurae-runtime/aurae/pull/386
I have started to take a stab at this as part of this PR, as it is becoming increasingly difficult to test scenarios that involve a couple of different APIs. If this looks good, I will move that single test from the cell_service (`test_list`) over to this crate as well and introduce some helpers/builders to make these tests easier to write and read. Also, we need to start/stop `auraed` in the background for this test.
Since we now have an established pattern for these integration tests, I'm going to close this issue.
It would be nice if we had a test harness around the auraescript examples we ship. What if we added comments with the expected output to the example TypeScript files and wrote a test runner that parses all those comments, builds an array of expected JSON output, and compares it against the actual output of the script? Like this:
With `"key": "value"` we mean: assert that the key exists in the output and that the values are equal. With `"key": "<value>"` we indicate that we expect the key to be present with any value. We could create a ... pattern for streams when we support them.
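To make this concrete, here is a sketch of what the comment parsing and key matching could look like. The `// expect:` comment syntax, function names, and example payload are all hypothetical illustrations, not an agreed convention:

```typescript
// Pull "// expect: { ... }" comments out of an example script's source.
function parseExpectations(source: string): Record<string, unknown>[] {
  return source
    .split("\n")
    .map((line) => /^\s*\/\/\s*expect:\s*(\{.*\})\s*$/.exec(line))
    .filter((m): m is RegExpExecArray => m !== null)
    .map((m) => JSON.parse(m[1]) as Record<string, unknown>);
}

// Check one actual JSON document against one expectation: literal values
// must match exactly, and the "<value>" wildcard only requires the key
// to be present.
function matchExpectation(
  actual: Record<string, unknown>,
  expected: Record<string, unknown>
): boolean {
  return Object.entries(expected).every(([key, value]) => {
    if (!(key in actual)) return false; // the key must always exist
    if (value === "<value>") return true; // wildcard: any value is fine
    return JSON.stringify(actual[key]) === JSON.stringify(value);
  });
}

// A hypothetical annotated example script:
const example = [
  'const cell = await cells.allocate({ name: "my-cell" });',
  '// expect: { "name": "my-cell", "cpu_shares": "<value>" }',
].join("\n");

const expectations = parseExpectations(example);
console.log(
  matchExpectation({ name: "my-cell", cpu_shares: 100 }, expectations[0])
); // true
```

A runner could then execute each example, parse its stdout as a sequence of JSON documents, and assert that the i-th document matches the i-th expectation.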