Now that we have Travis in place, I started writing some small tests, mostly to try to figure out the pitfalls of testing with real random data, and how to handle (for instance) non-converging training sessions.
Since the training can fail to converge, each such test is "wrapped" in an assumeTrue(), so it is skipped instead of failing when training doesn't converge.
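Something along these lines, using JUnit 4's Assume (the BackpropNetwork class and its train()/output() methods are placeholder names I'm using for illustration, not the project's real API; train() is assumed to return false when it fails to converge):

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assume.assumeTrue;

import org.junit.Test;

public class XorTrainingTest {

    private static final double[][] XOR_IN  = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    private static final double[][] XOR_OUT = {{0}, {1}, {1}, {0}};

    @Test
    public void trainedNetworkApproximatesXor() {
        // Placeholder API: train() is assumed to return false on non-convergence.
        BackpropNetwork net = new BackpropNetwork(2, 3, 1);
        boolean converged = net.train(XOR_IN, XOR_OUT, 10000);

        // assumeTrue() aborts the test as "skipped" rather than "failed",
        // so an unlucky random initialization doesn't break the build.
        assumeTrue(converged);

        assertEquals(1.0, net.output(new double[] {1, 0})[0], 0.1);
    }
}
```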
I ran the tests in a loop and tweaked the training values to reduce the number of failures while keeping an acceptable running time (it's failing roughly 1 in 100 runs, which IMHO is acceptable; of course it can be better).
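The loop itself was nothing fancy, roughly a throwaway harness like this (same placeholder API as above):

```java
public class ConvergenceRate {

    private static final double[][] XOR_IN  = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    private static final double[][] XOR_OUT = {{0}, {1}, {1}, {0}};

    public static void main(String[] args) {
        int runs = 1000;
        int failures = 0;
        for (int i = 0; i < runs; i++) {
            // Fresh network each run so every attempt starts from new random weights.
            BackpropNetwork net = new BackpropNetwork(2, 3, 1);
            if (!net.train(XOR_IN, XOR_OUT, 10000)) {
                failures++;
            }
        }
        // The goal was to keep this around 1 failure per 100 runs.
        System.out.printf("%d/%d runs failed to converge%n", failures, runs);
    }
}
```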
I added some tests for a couple of the demos, since it would be nice for them to be covered. I'll also try to clean up the demo code, since it's what one usually uses as a reference.
The tests are actually yielding some nice results, particularly for the JavaDoc (I'll be updating it as soon as I create the input validation for the constructors).
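The constructor validation I have in mind looks roughly like this (again a sketch on the placeholder BackpropNetwork; the parameter name is made up), with the JavaDoc spelling out the contract the tests can then rely on:

```java
/**
 * Creates a network with the given layer sizes, e.g. (2, 3, 1) for two
 * inputs, one hidden layer of three neurons and one output.
 *
 * @param layerSizes neurons per layer; at least two entries, all positive
 * @throws IllegalArgumentException if {@code layerSizes} is invalid
 */
public BackpropNetwork(int... layerSizes) {
    if (layerSizes == null || layerSizes.length < 2) {
        throw new IllegalArgumentException(
                "at least an input and an output layer are required");
    }
    for (int size : layerSizes) {
        if (size <= 0) {
            throw new IllegalArgumentException(
                    "layer sizes must be positive, got " + size);
        }
    }
    // ... initialize layers and random weights ...
}
```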
I will also be updating the interface documentation (NeuralNetwork, FuzzySet, etc.), so that the tests don't merely adapt to the code but really verify that the code does what it's supposed to do (for that, the JavaDoc needs to be a bit more polished).
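That way the tests end up asserting the documented contract rather than the current behavior, e.g.:

```java
import org.junit.Test;

public class BackpropNetworkContractTest {

    // The JavaDoc above promises an IllegalArgumentException for invalid
    // layer sizes; the test checks that promise, not implementation details.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsNonPositiveLayerSize() {
        new BackpropNetwork(2, 0, 1);
    }
}
```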