aws / fmeval

Foundation Model Evaluations Library
http://aws.github.io/fmeval
Apache License 2.0

[Feature] Streaming results/progress and summary metrics for faster feedback #278

Open athewsey opened 1 month ago

athewsey commented 1 month ago

I recently helped build an app for data-driven prompt engineering, in which users run fmeval-based evaluations for fast feedback to help refine prompt templates.

One challenge I noticed is that batch evaluation delays this feedback. Today, users must trade off speed against quality of results when sizing their input dataset, or application designers must implement some kind of chunking ahead of fmeval (as sketched below).
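For reference, here's a rough sketch of the chunking workaround I mean, using the existing `DataConfig`/`evaluate` API. It assumes a JSON Lines dataset with `question`/`answer` fields; the chunk size, algorithm choice, and field names are arbitrary illustrations, not a proposal:

```python
import json
from pathlib import Path

from fmeval.constants import MIME_TYPE_JSONLINES
from fmeval.data_loaders.data_config import DataConfig
from fmeval.eval_algorithms.factual_knowledge import (
    FactualKnowledge,
    FactualKnowledgeConfig,
)

CHUNK_SIZE = 50  # arbitrary: smaller chunks = faster feedback, noisier metrics


def evaluate_in_chunks(dataset_path: str, model_runner, work_dir: str = "chunks"):
    """Split a JSONL dataset into chunk files and run a full evaluation per
    chunk, yielding each chunk's results as soon as that chunk completes."""
    lines = Path(dataset_path).read_text().splitlines()
    Path(work_dir).mkdir(exist_ok=True)
    eval_algo = FactualKnowledge(FactualKnowledgeConfig(target_output_delimiter="<OR>"))
    for i in range(0, len(lines), CHUNK_SIZE):
        chunk_file = Path(work_dir) / f"chunk_{i // CHUNK_SIZE}.jsonl"
        chunk_file.write_text("\n".join(lines[i : i + CHUNK_SIZE]))
        config = DataConfig(
            dataset_name=chunk_file.stem,
            dataset_uri=str(chunk_file),
            dataset_mime_type=MIME_TYPE_JSONLINES,
            model_input_location="question",  # assumed field names
            target_output_location="answer",
        )
        # Each call is a separate batch job, so feedback still only arrives
        # at chunk granularity -- the overhead this issue asks to avoid.
        yield eval_algo.evaluate(
            model=model_runner,
            dataset_config=config,
            prompt_template="$model_input",
            save=False,
        )
```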

To accelerate workflows like these, it would be useful if results could start streaming back while the batch job runs (including point-in-time summary metrics), so an app could display intermediate progress. That way, a prompt engineer could identify an obviously underperforming change early, revert it, and abort the rest of the evaluation.

A caveat, though: I'm not aware of a common, standard pattern for this kind of progress callback/hook in Python's synchronous-by-default ecosystem, or of how Ray might affect it. Something along the lines of the sketch below might work:
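To make the idea concrete, here's a minimal sketch of one possible shape for such a hook, built on the existing per-sample `evaluate_sample` API. The `on_progress` callback signature, the running-mean summary, and the early-abort threshold are all hypothetical illustrations, not a proposal for the final interface:

```python
from typing import Callable

from fmeval.eval_algorithms.factual_knowledge import (
    FactualKnowledge,
    FactualKnowledgeConfig,
)


def evaluate_streaming(
    records: list,  # list of dicts with "model_output" and "target_output" keys
    on_progress: Callable[[int, int, dict], None],  # hypothetical hook signature
):
    """Evaluate record by record, invoking `on_progress` after each sample with
    point-in-time summary metrics (here, a simple running mean per score name)."""
    eval_algo = FactualKnowledge(FactualKnowledgeConfig(target_output_delimiter="<OR>"))
    totals: dict = {}
    done = 0
    for record in records:
        scores = eval_algo.evaluate_sample(
            target_output=record["target_output"],
            model_output=record["model_output"],
        )
        done += 1
        for score in scores:
            totals[score.name] = totals.get(score.name, 0.0) + score.value
        running_means = {name: total / done for name, total in totals.items()}
        on_progress(done, len(records), running_means)


# Example hook: surface progress and abort early once the point-in-time
# summary is clearly worse than some baseline (threshold is arbitrary).
class EarlyAbort(Exception):
    pass


def progress_hook(done: int, total: int, summary: dict):
    print(f"{done}/{total}: {summary}")
    if done >= 20 and summary.get("factual_knowledge", 1.0) < 0.3:
        raise EarlyAbort()
```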

Zhenshan-Jin commented 1 month ago

Thanks for your feedback! We will consider adding it to our roadmap and prioritizing it.