Context

Our "old" tests involve:

1. Write Elm code that describes the functionality you want to test.
2. Run Morphir to convert it into Spark code.
3. Run a test that imports that code, creates test data by hand, runs the generated code on it, and manually verifies that the result is what we expect.
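For illustration, here is a minimal sketch of the kind of Elm function involved in step 1. The module name, record fields, and filter logic are hypothetical, not taken from the actual SparkTests sources:

```elm
module SparkTests.FilterSketch exposing (testFilter)

-- Hypothetical example: a relational-style function of the shape that
-- Morphir's Spark backend can translate into a DataFrame filter.


type alias Antique =
    { name : String
    , ageOfItem : Int
    }


testFilter : List Antique -> List Antique
testFilter source =
    source
        |> List.filter (\antique -> antique.ageOfItem >= 20)
```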
It would be much better if the Elm output could be compared against the Spark output for all of them. That way we can test against larger datasets with less effort, and we don't have to mentally work out every distinct possibility and check that it behaves as we expect.
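One way to make the two outputs comparable is to serialise the Elm-side result to CSV and diff it against the CSV that the generated Spark job writes out. A minimal sketch, assuming a simple two-field record; this encoder is hypothetical and far simpler than the real AntiqueCsvEncoder.elm:

```elm
module CsvEncodeSketch exposing (expectedCsv)

-- Hypothetical encoder: turn the Elm-side result into CSV text that
-- can be compared line-for-line with the Spark job's CSV output.


type alias Antique =
    { name : String
    , ageOfItem : Int
    }


encodeAntique : Antique -> String
encodeAntique antique =
    antique.name ++ "," ++ String.fromInt antique.ageOfItem


expectedCsv : List Antique -> String
expectedCsv antiques =
    antiques
        |> List.map encodeAntique
        |> String.join "\n"
```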
Actions
[ ] Check how much effort it would take to generate datasets for input/output types other than Antiques. (Antiques appears to use tests-integration/spark/elm-tests/src/AntiqueCsvEncoder.elm, tests-integration/spark/elm-tests/src/AntiqueCsvDecoder.elm, and tests-integration/spark/elm-tests/tests/GenerateAntiqueTestData.elm.)
[ ] Write tests in tests-integration/spark/elm-tests/tests/ that run every example in tests-integration/spark/model/src/SparkTests/*.elm (except AntiqueRulesTests.elm); see the elm-test sketch after this list.
[ ] Replace all the test cases in tests-integration/spark/test/src/*.scala (except AntiqueRulesTests.scala) with ones that compare against the output of running the Elm tests.
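A minimal elm-test sketch of what one such test might look like; the test name, record shape, and expected values are hypothetical:

```elm
module FilterTestSketch exposing (tests)

import Expect
import Test exposing (Test, test)

-- Hypothetical elm-test case: run the same logic that Morphir
-- translates to Spark on hand-picked data, and pin its result.


tests : Test
tests =
    test "filter keeps only antiques at least 20 years old" <|
        \_ ->
            [ { name = "chair", ageOfItem = 21 }
            , { name = "stool", ageOfItem = 3 }
            ]
                |> List.filter (\antique -> antique.ageOfItem >= 20)
                |> Expect.equal [ { name = "chair", ageOfItem = 21 } ]
```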