openml-labs / gama

An automated machine learning tool aimed at facilitating AutoML research.
https://openml-labs.github.io/gama/master/
Apache License 2.0

Increase code unit test coverage #10

Open PGijsbers opened 5 years ago

PGijsbers commented 5 years ago

Not all code is currently covered by unit and/or system tests. In some cases this does not matter (e.g. not every ValueError scenario needs to be automatically checked, I think), but other functionality still needs coverage (e.g. the time-out behavior in evaluation.py).
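For illustration, a minimal sketch of what a time-out test could look like; the `evaluate` helper and its signature here are hypothetical stand-ins, not gama's real evaluation.py API:

```python
# Sketch of a time-out test, assuming a hypothetical `evaluate` helper that
# runs a callable in a subprocess under a time budget; the real code path in
# gama's evaluation.py may differ.
import time
import multiprocessing


def evaluate(fn, timeout):
    """Hypothetical stand-in: run `fn` in a subprocess with a time limit."""
    process = multiprocessing.Process(target=fn)
    process.start()
    process.join(timeout)
    if process.is_alive():
        process.terminate()
        process.join()
        return "timeout"
    return "ok"


def slow_function():
    time.sleep(10)  # deliberately exceeds the budget


def test_evaluation_times_out():
    # The evaluation should be cut off and reported as a time-out,
    # rather than blocking the test suite for the full 10 seconds.
    assert evaluate(slow_function, timeout=0.5) == "timeout"
```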

PGijsbers commented 5 years ago

In particular, coverage from unit tests needs to expand. With mocking, for example, ensembling and search algorithms can also be included in quick tests (see the sketch at the end of this comment). Missing coverage:
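As an illustration of the mocking idea above (the module layout and function names here are hypothetical, not gama's real entry points):

```python
# Sketch of how mocking could keep a search-related test fast: the expensive
# search step is replaced by a Mock so the surrounding logic can be checked
# in milliseconds.
from unittest import mock


def run_automl(search_fn, budget):
    """Hypothetical driver that delegates the expensive work to `search_fn`."""
    return search_fn(budget)


def test_driver_passes_budget_to_search():
    fake_search = mock.Mock(return_value="best_pipeline")
    result = run_automl(fake_search, budget=60)
    fake_search.assert_called_once_with(60)
    assert result == "best_pipeline"
```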

PGijsbers commented 5 years ago

It would also be desirable to create smaller test modules (e.g. separate tests for ARFF input or string labels).
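A sketch of what a dedicated ARFF-input module (e.g. a hypothetical tests/test_arff_input.py) could contain; scipy's ARFF loader is used here purely for illustration, since gama may read ARFF files through a different code path:

```python
# Self-contained ARFF round-trip test: write a tiny ARFF file to a temporary
# directory and check that column names and rows are parsed as expected.
from scipy.io import arff

ARFF_CONTENT = """\
@RELATION toy
@ATTRIBUTE x NUMERIC
@ATTRIBUTE y NUMERIC
@ATTRIBUTE class {a,b}
@DATA
1.0,2.0,a
3.0,4.0,b
"""


def test_arff_file_is_parsed(tmp_path):
    arff_file = tmp_path / "toy.arff"
    arff_file.write_text(ARFF_CONTENT)
    data, meta = arff.loadarff(str(arff_file))
    assert meta.names() == ["x", "y", "class"]
    assert len(data) == 2
```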

PGijsbers commented 5 years ago

An initial effort was merged in #50. I think the gama/genetic_programming/compilers/scikitlearn.py code is reported as not covered because it is only executed in a separate process (in system tests).
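For reference, coverage.py can be configured to follow code that runs in child processes; a minimal sketch of such a configuration (the exact settings gama ended up using may differ):

```ini
# .coveragerc — sketch: let coverage.py trace code executed in subprocesses
# started via multiprocessing. Each process writes its own data file, and
# `coverage combine` merges them afterwards.
[run]
branch = True
parallel = True
concurrency = multiprocessing
source = gama
```

With pytest-cov, the per-process data files are typically combined automatically after the test run.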

PGijsbers commented 4 years ago

More progress was made in #84. Coverage of subprocesses is now reported correctly. Despite reaching 90% coverage, it is still not obvious to me how to test AutoML systems reliably and quickly.

PGijsbers commented 1 year ago

How to adequately test AutoML systems remains an open issue, especially how to do this well with unit tests as opposed to simply running a (small) benchmark. I am updating the title to reflect that the goal is (currently) not more general code coverage, but rather testing for more potential issues in the unit test suite.