Currently, the repository is not tested at all. We need some basic tests that ensure the tool still works when changes are made; ideally, regressions should be detected automatically.
Implement a test for basic sanity checking: train a simple model on a trivial prediction task and check that (a) chebai runs at all and (b) the model is able to learn the prediction task successfully.
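Such a sanity check could look roughly like the following. This is a framework-agnostic sketch: the tiny logistic-regression "model" and the trivial task are stand-ins, not chebai's actual model or training entry point.

```python
import numpy as np


def train_logistic(X, y, lr=0.5, epochs=200):
    """Tiny logistic regression standing in for a real model's training loop."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        grad = p - y                             # gradient of BCE loss w.r.t. logits
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b


def test_model_learns_trivial_task():
    # Trivial, linearly separable task: label is 1 iff the first feature is positive.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] > 0).astype(float)
    w, b = train_logistic(X, y)  # (a) training runs without errors
    preds = (X @ w + b) > 0
    accuracy = (preds == y.astype(bool)).mean()
    # (b) the model actually learned the task
    assert accuracy > 0.95, f"model failed to learn trivial task: accuracy={accuracy:.2f}"
```

In the real test suite, `train_logistic` would be replaced by a minimal chebai training run on a tiny dataset, keeping the same two assertions.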
Implement a test that ensures the data splits are performed correctly, i.e. that every entity appears in only one of the train, validation, and test datasets.
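The disjointness check itself is straightforward set arithmetic. A possible helper, shown here with placeholder ChEBI IDs (a real test would load the actual split contents from the data module):

```python
from itertools import combinations


def check_splits_disjoint(**splits):
    """Return entities that leak across splits; an empty dict means the splits are clean."""
    leaks = {}
    for (name_a, ids_a), (name_b, ids_b) in combinations(splits.items(), 2):
        overlap = set(ids_a) & set(ids_b)
        if overlap:
            leaks[(name_a, name_b)] = overlap
    return leaks


# Example with placeholder IDs; "CHEBI:11" is a deliberate leak for demonstration.
leaks = check_splits_disjoint(
    train={"CHEBI:10", "CHEBI:11"},
    validation={"CHEBI:12"},
    test={"CHEBI:13", "CHEBI:11"},
)
assert leaks == {("train", "test"): {"CHEBI:11"}}
```

Reporting the offending entities (rather than just asserting disjointness) makes a failing test immediately actionable.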