@egil: This looks good. The only concern I have is that we are changing the model training code without any regression tests in place. We really need some basic tests that run the trained models against the test data and verify the output against our known results. We'll occasionally break those tests as we change things, but we want to know when our output is changing.
Anyway, I don't think that needs to stop this from merging, since I think we should target those tests to the 1.1 tag. So here's my plan:
1. Wait until #56 is checked in.
2. Check out 1.1.
3. Run what are now the active models against some or all of the test data and store the output.
4. Check out master.
5. Branch and create regression tests that compare the model output to the stored output. This way, if we inadvertently change the model results, we'll know.
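The comparison in step 5 could look something like this minimal sketch (the file layout and the `check_against_baseline` helper are hypothetical, just to illustrate the idea; the real test would feed `model.predict(test_data)` in place of the hard-coded lists):

```python
import json
import math
import tempfile

# Tolerance is an assumption: allows tiny numeric drift across
# library/platform versions without failing the regression test.
TOLERANCE = 1e-6

def save_baseline(predictions, path):
    """Capture the current model output (run once on the 1.1 tag)."""
    with open(path, "w") as f:
        json.dump(predictions, f)

def check_against_baseline(predictions, path, tol=TOLERANCE):
    """Return True when predictions match the stored baseline within tol."""
    with open(path) as f:
        expected = json.load(f)
    if len(predictions) != len(expected):
        return False
    return all(math.isclose(a, e, abs_tol=tol)
               for a, e in zip(predictions, expected))

# Demo with stand-in numbers; a real test would use model.predict(test_data).
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    baseline_path = f.name
save_baseline([0.1, 0.9, 0.5], baseline_path)
assert check_against_baseline([0.1, 0.9, 0.5], baseline_path)         # unchanged
assert not check_against_baseline([0.1, 0.9, 0.4999], baseline_path)  # drifted
```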
Adding this text to issue #32 and assigning it to myself.