Major Changes
The evalem.pipelines module is added, introducing a new base type/component: evalem.pipelines._base.Pipeline.
evalem.pipelines.defaults.SimpleEvaluationPipeline is implemented; it takes a single model, inputs, and a list of evaluators, and runs the model's outputs through each evaluator. It can be invoked via .run(...) or by calling the object directly (see the sketch after this paragraph for extending the base type).
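For anyone extending the new base type, a minimal sketch of a custom pipeline is below. This is illustrative only: the exact abstract interface of evalem.pipelines._base.Pipeline isn't spelled out here, so the constructor and run(...) signature are assumptions.

# Illustrative sketch, NOT library code: assumes Pipeline subclasses
# implement run(...) and that the model and evaluators are callable as shown.
from evalem.pipelines._base import Pipeline

class VerboseEvaluationPipeline(Pipeline):
    def __init__(self, model, evaluators):
        self.model = model
        self.evaluators = evaluators

    def run(self, inputs, references, **kwargs):
        # run the model once, then feed its predictions to every evaluator
        predictions = self.model(inputs, **kwargs)
        print(f"Ran model on {len(inputs)} inputs")
        return [
            evaluator(predictions=predictions, references=references)
            for evaluator in self.evaluators
        ]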
Minor Changes
The tests/ test suite is refactored to use the conftest.py configuration format for pytest, so shared fixtures live in one place (an illustrative example follows).
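For context, conftest.py lets pytest discover shared fixtures automatically, without per-module imports. A hypothetical example of what this looks like (the actual fixtures in tests/ may differ):

# tests/conftest.py -- illustrative; the repo's real fixtures may differ
import pytest

from evalem.evaluators import TextClassificationEvaluator
from evalem.models import TextClassificationHFPipelineWrapper

@pytest.fixture(scope="module")
def model():
    return TextClassificationHFPipelineWrapper()

@pytest.fixture(scope="module")
def evaluator():
    return TextClassificationEvaluator()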
Usage
from evalem.pipelines import SimpleEvaluationPipeline
from evalem.models import TextClassificationHFPipelineWrapper
from evalem.evaluators import TextClassificationEvaluator
# can switch to any implemented wrapper
model = TextClassificationHFPipelineWrapper()
# can switch to other evaluator implementation
evaluator = TextClassificationEvaluator()
# initialize the pipeline with the model and evaluator(s)
eval_pipe = SimpleEvaluationPipeline(model=model, evaluators=evaluator)
# `inputs` and `references` are the dataset inputs and gold references
results = eval_pipe(inputs, references)
# or equivalently
results = eval_pipe.run(inputs, references)
Note: The pipeline is stateless. In the future, another implementation could take a mapping of input datasets to corresponding models and produce results for all the metrics; a rough sketch of that idea follows.
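To make that idea concrete, here is one way such an implementation might look. Everything here is hypothetical and not part of this release: the class name, the mapping structure, and the call signatures are all assumptions.

# Hypothetical only -- not part of this release. Maps each named dataset
# to its own model and runs the shared evaluators over every pairing.
class MultiModelEvaluationPipeline:
    def __init__(self, dataset_to_model: dict, evaluators: list):
        self.dataset_to_model = dataset_to_model
        self.evaluators = evaluators

    def run(self, datasets: dict) -> dict:
        # datasets: name -> (inputs, references)
        results = {}
        for name, (inputs, references) in datasets.items():
            predictions = self.dataset_to_model[name](inputs)
            results[name] = [
                evaluator(predictions=predictions, references=references)
                for evaluator in self.evaluators
            ]
        return results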