Closed by athewsey 2 weeks ago
Rebased to current main. As I understood from the original review there wasn't actually a need for any change, but let me know if this is not the case!

Appreciate it if y'all can help get this merged so we can have a more intuitive API 🙏
Issue #, if available: #269
Description of changes:
Extend the EvalAlgorithmInterface.evaluate() interface to support specifying a list of multiple data_config objects. evaluate() already returns a list of results by dataset, because when run with no data_config argument, all applicable built-in datasets are analyzed. As mentioned in the attached issue, it was weird and confusing that users couldn't explicitly specify a set of more than one dataset to use.

Testing done:
Tested via get_dataset_configs() with an evaluation algorithm (FactualKnowledge) where multiple integration test datasets had already been defined.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
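For reviewers skimming the diff, here is a rough, self-contained sketch of the interface shape this PR proposes. It is not the actual fmeval implementation: DataConfig, EvalOutput, and BUILT_IN_DATASETS below are simplified hypothetical stand-ins used only to illustrate how a single config, a list of configs, and no config at all can be normalized into one result per dataset.

```python
# Hypothetical sketch of the extended evaluate() signature.
# DataConfig / EvalOutput / BUILT_IN_DATASETS are simplified stand-ins,
# NOT the real fmeval classes.
from dataclasses import dataclass
from typing import List, Optional, Union


@dataclass
class DataConfig:  # stand-in for fmeval's DataConfig
    dataset_name: str
    dataset_uri: str


@dataclass
class EvalOutput:  # stand-in for fmeval's per-dataset result
    dataset_name: str
    score: float


# Stand-in for the built-in datasets used when no config is given.
BUILT_IN_DATASETS = [DataConfig("builtin_ds", "s3://example-bucket/builtin.jsonl")]


class EvalAlgorithm:
    def evaluate(
        self,
        data_config: Optional[Union[DataConfig, List[DataConfig]]] = None,
    ) -> List[EvalOutput]:
        # No config: fall back to all applicable built-in datasets,
        # matching the existing behavior described above.
        if data_config is None:
            configs = BUILT_IN_DATASETS
        # Single config: wrap it so the return type is always a list.
        elif isinstance(data_config, DataConfig):
            configs = [data_config]
        # List of configs: the new, explicit multi-dataset path.
        else:
            configs = list(data_config)
        # Dummy scoring; the real algorithm would evaluate each dataset.
        return [EvalOutput(c.dataset_name, 0.0) for c in configs]


algo = EvalAlgorithm()
results = algo.evaluate(
    [
        DataConfig("my_ds_1", "s3://example-bucket/ds1.jsonl"),
        DataConfig("my_ds_2", "s3://example-bucket/ds2.jsonl"),
    ]
)
print([r.dataset_name for r in results])  # one EvalOutput per dataset
```

Accepting `Union[DataConfig, List[DataConfig]]` keeps the change backward compatible: existing single-config callers are unaffected, and the return type stays a list in every case.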