NASA-IMPACT / evalem

An evaluation framework for your large model pipelines

[alpha] Addition of simple pipeline abstraction #11

Closed NISH1001 closed 1 year ago

NISH1001 commented 1 year ago

Major Changes

Minor Changes

Usage

from evalem.pipelines import SimpleEvaluationPipeline
from evalem.models import TextClassificationHFPipelineWrapper
from evalem.evaluators import TextClassificationEvaluator

# can switch to any implemented wrapper
model = TextClassificationHFPipelineWrapper()

# can switch to any other evaluator implementation
evaluator = TextClassificationEvaluator()

# initialize the pipeline
eval_pipe = SimpleEvaluationPipeline(model=model, evaluators=evaluator)

results = eval_pipe(inputs, references)

# or, equivalently
results = eval_pipe.run(inputs, references)

Note: The pipeline is stateless. In the future, we can add another implementation that takes a mapping of input datasets to their corresponding models and produces results for all the metrics.
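To make the abstraction concrete, here is a minimal standalone sketch of the stateless pattern the usage section relies on: a pipeline object that holds a model and one or more evaluators, runs the model once, and fans predictions out to every evaluator. The class and the toy model/evaluator below are hypothetical illustrations, not the actual evalem implementation.

```python
from typing import Callable, List


class EvaluationPipeline:
    """Minimal stateless pipeline: one model, one or more evaluators.

    Hypothetical sketch; names only mirror the evalem API.
    """

    def __init__(self, model: Callable, evaluators) -> None:
        self.model = model
        # accept a single evaluator or a list/tuple of them
        self.evaluators = (
            list(evaluators)
            if isinstance(evaluators, (list, tuple))
            else [evaluators]
        )

    def run(self, inputs, references) -> List[dict]:
        # run the model once, then score with every evaluator
        predictions = self.model(inputs)
        return [evaluate(predictions, references) for evaluate in self.evaluators]

    # calling the pipeline is sugar for run()
    __call__ = run


# toy model and evaluator, just to exercise the pattern
model = lambda texts: [t.upper() for t in texts]
accuracy = lambda preds, refs: {
    "accuracy": sum(p == r for p, r in zip(preds, refs)) / len(refs)
}

pipe = EvaluationPipeline(model=model, evaluators=accuracy)
results = pipe(["a", "b"], ["A", "X"])  # → [{"accuracy": 0.5}]
```

Because the pipeline keeps no per-call state, the same instance can be reused across datasets, which is what makes a future dataset-to-model mapping layer straightforward to build on top.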