jonnor opened 9 years ago
Current thinking is to drive tests using the FBP protocol, in a fashion suitable for any FBP runtime. This will require the test fixture to be expressed as a graph, and the test input/output data to be expressed as a set of IPs. Input and reference images can be set up and stored as dataURLs (or HTTP URLs).
It is desirable that Flowhub can visualize actual versus expected values, to aid debugging. This could be done with dedicated/specialized edge inspectors that can handle two values and show a diff. Inputting and changing data values for the test should be able to reuse the data selectors used for IIPs in the graph.
Some prototyping in MicroFlo https://twitter.com/jononor/status/587565844768624640
https://github.com/flowbased/fbp-spec is the project which should enable this
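For reference, an fbp-spec testsuite is a declarative YAML file along these lines (the component name, input values, and matchers below are made up for illustration, not taken from our actual tests):

```yaml
# Hypothetical suite in the fbp-spec format:
# a topic (component/graph under test) and a list of cases,
# each with inputs to send and expectations on the outputs.
topic: "image/Invert"
name: "Inverting an image"
cases:
- name: "inverting a white input"
  assertion: "output should be a valid image dataURL"
  inputs:
    in: "data:image/png;base64,..."
  expect:
    out:
      contains: "data:image/png"
```

Because the suite is plain data, it can be generated, edited, and visualized by tools like Flowhub rather than written as imperative Mocha code.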
I think it is realistic to move to fbp-spec for our graphtests now. Most important is that we keep the easy workflow of being able to add a test by just giving a name and url/params, and dropping in an expected output image.
For this we will need to wrap the current testing utilities into NoFlo components. Suggested component inputs:

- `url: /graph/fooo`
- or `params: { graph: foo, input: foo, param1: bar }`
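The two input forms should presumably be equivalent. A hypothetical helper (name and query layout invented for illustration) that lowers the `params` form to the `url` form could look like:

```javascript
// Convert { graph: 'foo', input: ..., param1: ... } into the
// equivalent '/graph/foo?input=...&param1=...' URL form.
// This is a sketch, assuming all non-graph keys become query params.
function paramsToUrl(params) {
  const { graph, ...rest } = params;
  const query = Object.entries(rest)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join('&');
  return `/graph/${graph}` + (query ? `?${query}` : '');
}

console.log(paramsToUrl({ graph: 'foo', input: 'demo.png', param1: 'bar' }));
// → /graph/foo?input=demo.png&param1=bar
```

A component wrapping this would then only need the `url` code path internally.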
Right now we do a couple of intermediate validations, like checking whether we have the expected data for something on disk. Currently in Mocha these are formulated as testcases, but they are really more like independent assertions. For this, could we use exported ports from the middle of the fixture graph?
At this point the fixture graph may be getting so complicated that some way of introspecting it may be highly desired. Ideally this would be visual in Flowhub.
While the separate components/graph from two comments up would be ideal, an intermediate step could be to have one fat component for the whole thing, then split it up later as needed.
Another potential benefit is to generate documentation directly from the testcases, ref https://github.com/noflo/noflo-ui/issues/51
Many of our testcases are data-driven end-to-end reference tests; see `spec/graphtests.yml`.
Each case specifies a name, the graph to run with its inputs (url/params), and an expected output image. Actual output is verified against the reference output using `gegl-imgcmp`, with a certain tolerance. The tool also provides a diff when there are differences, which is useful for debugging failures and ruling out false positives.

We should also be able to test error conditions and performance in a similar fashion, as these can also be set up and observed from the outside (black-box).
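As a rough sketch of what a tolerance-based image comparison does (gegl-imgcmp itself decodes and compares real image files; here we assume equally sized arrays of raw channel values, purely for illustration):

```javascript
// Largest per-channel difference between two raw pixel buffers.
// Mismatched sizes are treated as maximally different.
function maxPixelDifference(actual, expected) {
  if (actual.length !== expected.length) return Infinity;
  let max = 0;
  for (let i = 0; i < actual.length; i++) {
    max = Math.max(max, Math.abs(actual[i] - expected[i]));
  }
  return max;
}

// A case passes when the worst channel difference is within tolerance.
function imagesMatch(actual, expected, tolerance) {
  return maxPixelDifference(actual, expected) <= tolerance;
}
```

The tolerance is what lets us rule out false positives from e.g. rounding differences between platforms, while still catching real regressions.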
Adding new cases, and modifying existing ones should be possible to do entirely from within Flowhub.