BaderLab / saber

Saber is a deep-learning-based tool for information extraction in the biomedical domain. Pull requests are welcome! Note: this is a work in progress. Many things are broken, and the codebase is not stable.
https://baderlab.github.io/saber/
MIT License

Easy way to evaluate a model #125

Open JohnGiorgi opened 5 years ago

JohnGiorgi commented 5 years ago

There is currently no easy way to evaluate a trained model. There should be some kind of interface for this, e.g.

```python
from saber import Saber

sb = Saber()
sb.load('path/to/some/model')

sb.evaluate('/path/to/some/dataset/to/evaluate')
```

and / or

```
(saber) $ python -m saber.cli.test --pretrained_model path/to/pretrained/model --dataset_folder path/to/datasets/to/evaluate/on
```
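
Neither of these entry points exists yet. For the record, a minimal sketch of what the CLI wrapper might look like, assuming the `sb.evaluate()` method proposed above gets implemented (the module path and flag names are just the ones suggested here, and `evaluate()` itself is hypothetical):

```python
# Hypothetical saber/cli/test.py -- assumes a Saber.evaluate() method exists
import argparse

from saber import Saber


def main():
    parser = argparse.ArgumentParser(description='Evaluate a trained Saber model on a dataset.')
    parser.add_argument('--pretrained_model', required=True,
                        help='Path to (or name of) a pre-trained Saber model')
    parser.add_argument('--dataset_folder', required=True,
                        help='Path to the dataset(s) to evaluate the model on')
    args = parser.parse_args()

    sb = Saber()
    sb.load(args.pretrained_model)
    sb.evaluate(args.dataset_folder)  # hypothetical method, see above


if __name__ == '__main__':
    main()
```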

Here is a hack that works for the time being and can serve as inspiration:

```python
from saber.saber import Saber
from saber.metrics import Metrics
from saber import constants

# Workaround: make sure the special UNK/PAD tokens are defined
constants.UNK = '<UNK>'
constants.PAD = '<PAD>'

sb = Saber()
# Load the pre-trained model and the dataset to evaluate on
sb.load('/home/john/dev/response/pretrained_models/CALBC_100K_blacklisted')
sb.load_dataset('/home/john/dev/response/datasets/train_on_BC4CHEMD_test_on_BC5CDR')

# Score predictions with right-boundary matching
sb.config.criteria = 'right'

# prepare_data_for_training() returns data for each loaded dataset; take the first
evaluation_data = sb.model.prepare_data_for_training()[0]

# Sanity check: the tag mapping of the dataset being evaluated on
print(sb.datasets[-1].idx_to_tag)

metric = Metrics(sb.config, sb.model, evaluation_data, sb.datasets[-1].idx_to_tag, './', model_idx=0)

# _evaluate() is private, hence the "hack"
test_scores = metric._evaluate(evaluation_data, partition='test')
metric.print_performance_scores(test_scores, title='test')
```
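
The hack maps fairly directly onto the proposed API. A rough sketch of how it could be folded into a `Saber.evaluate()` method (the method name, signature, and placement are assumptions, and it still leans on the private `Metrics._evaluate()`):

```python
# Hypothetical Saber.evaluate(), distilled from the hack above
def evaluate(self, dataset_folder, partition='test'):
    """Evaluates the currently loaded model on the dataset at `dataset_folder`."""
    from saber.metrics import Metrics

    self.load_dataset(dataset_folder)

    evaluation_data = self.model.prepare_data_for_training()[0]
    idx_to_tag = self.datasets[-1].idx_to_tag

    metric = Metrics(self.config, self.model, evaluation_data, idx_to_tag,
                     './', model_idx=0)

    # Still relies on the private API; Metrics would ideally grow a public method
    scores = metric._evaluate(evaluation_data, partition=partition)
    metric.print_performance_scores(scores, title=partition)

    return scores
```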