Edited the logic in `SummarizationAccuracyMetrics` so that the `target_output` field no longer has to be duplicated in order to be evaluated against each `model_output`. When there are multiple `target_outputs`, we take the max score from `self.compute_metric`.
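A minimal sketch of the max-over-targets behavior, using a hypothetical token-overlap scorer in place of the real `self.compute_metric`:

```python
from typing import List


def compute_metric(model_output: str, target_output: str) -> float:
    """Hypothetical stand-in for self.compute_metric: fraction of target tokens
    that also appear in the model output."""
    model_tokens = set(model_output.split())
    target_tokens = set(target_output.split())
    if not target_tokens:
        return 0.0
    return len(model_tokens & target_tokens) / len(target_tokens)


def best_score(model_output: str, target_outputs: List[str]) -> float:
    """Score the single model output against every candidate target and keep
    the max, so targets never need to be duplicated per model output."""
    return max(compute_metric(model_output, target) for target in target_outputs)
```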
Added a `target_output_keys_provider` parameter to support computing scores given a key that maps to a list of possible target outputs.
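Roughly, the new parameter lets a record carry its candidate targets under a single key; the sketch below uses a hypothetical record shape and a trivial exact-match scorer for illustration:

```python
def exact_match(model_output: str, target_output: str) -> float:
    # Trivial scorer used only for illustration.
    return 1.0 if model_output == target_output else 0.0


record = {
    "model_output": "the cat sat",
    # Hypothetical key name; target_output_keys_provider points at it.
    "candidate_targets": ["the cat sat", "a cat was sitting"],
}

target_output_keys_provider = "candidate_targets"

# One key resolves to the whole list of targets; score is the max over them.
score = max(
    exact_match(record["model_output"], target)
    for target in record[target_output_keys_provider]
)
```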
Updated the ordering of subclass transforms to keep it consistent with their parent transforms; otherwise positional parameters would get mixed up (e.g. a string landing where a list was expected, so `input_keys + model_output_keys` raised `TypeError: can only concatenate str (not "list") to str`). I had to add a default value for `BertScore` because it follows a parameter with a default value.
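The mix-up can be reproduced in isolation; assuming `input_keys` ended up bound to a single string while `model_output_keys` stayed a list, concatenation fails exactly as above:

```python
input_keys = "prompt"                  # a single key (str) due to mixed-up arguments
model_output_keys = ["model_output"]   # a list of keys, as expected

# Reproduces the error from the description:
try:
    _ = input_keys + model_output_keys
except TypeError as err:
    print(err)  # can only concatenate str (not "list") to str

# With the arguments in the right order, both sides are lists and
# concatenation works:
input_keys = ["prompt"]
all_keys = input_keys + model_output_keys
```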