There is currently no easy way to evaluate a trained model. There should be some kind of interface for this, e.g.
```python
from saber import Saber

sb = Saber()
sb.load('path/to/some/model')
sb.evaluate('/path/to/some/dataset/to/evaluate')
```
and / or
```
(saber) $ python -m saber.cli.test --pretrained_model path/to/pretrained/model --dataset_folder path/to/datasets/to/evaluate/on
```
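A minimal sketch of what such a `saber.cli.test` entry point might look like. The flag names come from the command above; everything else, including the `evaluate()` call, is the proposed interface and not existing Saber API:

```python
# Hypothetical saber/cli/test.py
import argparse

from saber.saber import Saber


def main():
    parser = argparse.ArgumentParser(description='Evaluate a pretrained Saber model.')
    parser.add_argument('--pretrained_model', required=True,
                        help='Path to (or name of) a pretrained model.')
    parser.add_argument('--dataset_folder', required=True,
                        help='Path to the dataset(s) to evaluate on.')
    args = parser.parse_args()

    sb = Saber()
    sb.load(args.pretrained_model)
    # evaluate() does not exist yet; it is the Python API proposed above.
    sb.evaluate(args.dataset_folder)


if __name__ == '__main__':
    main()
```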
Here is a hack that works for the time being and can serve as inspiration:
```python
from saber.saber import Saber
from saber.metrics import Metrics
from saber import constants

# Override the special token constants before loading the model.
constants.UNK = '<UNK>'
constants.PAD = '<PAD>'

sb = Saber()
sb.load('/home/john/dev/response/pretrained_models/CALBC_100K_blacklisted')
sb.load_dataset('/home/john/dev/response/datasets/train_on_BC4CHEMD_test_on_BC5CDR')
# Scoring criterion used by Metrics.
sb.config.criteria = 'right'

# Re-use the training data pipeline to featurize the loaded dataset.
evaluation_data = sb.model.prepare_data_for_training()[0]
print(sb.datasets[-1].idx_to_tag)

metric = Metrics(sb.config, sb.model, evaluation_data, sb.datasets[-1].idx_to_tag, './',
                 model_idx=0)

# Compute scores on the test partition and pretty-print them.
test_scores = metric._evaluate(evaluation_data, partition='test')
metric.print_performance_scores(test_scores, title='test')
```
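For what it's worth, the proposed `evaluate()` could probably be little more than a wrapper around the same internals the hack uses. The sketch below is an assumption about how that wrapper might look, reusing only the calls that already appear in the snippet above, and is not a description of Saber's actual internals:

```python
from saber.metrics import Metrics


def evaluate(sb, dataset_folder, partition='test'):
    """Hypothetical evaluate(): `sb` is a Saber instance with a model already load()-ed."""
    sb.load_dataset(dataset_folder)
    # Re-use the training pipeline to featurize the evaluation set, as in the hack above.
    evaluation_data = sb.model.prepare_data_for_training()[0]
    metric = Metrics(sb.config, sb.model, evaluation_data,
                     sb.datasets[-1].idx_to_tag, './', model_idx=0)
    scores = metric._evaluate(evaluation_data, partition=partition)
    metric.print_performance_scores(scores, title=partition)
    return scores
```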