AIF360 is a comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
I have tried the following code to compute the statistical_parity_difference of a multiclass classification dataset:
import pandas as pd
from aif360.datasets import StructuredDataset
from aif360.metrics import DatasetMetric

# Toy multiclass dataset with 'sex' as the protected attribute
d = {'one': [.1, .2, .3, .4], 'sex': [0, 1, 1, 0],
     'three': [.5, .8, .9, 1], 'label': [.1, .2, .3, .1]}
df = pd.DataFrame(data=d)
sd = StructuredDataset(df=df, label_names=['label'],
                       protected_attribute_names=['sex'])

privileged_groups = [{'sex': 1}]
unprivileged_groups = [{'sex': 0}]
metric_orig_train = DatasetMetric(sd, unprivileged_groups=unprivileged_groups,
                                  privileged_groups=privileged_groups)
print("Difference in mean outcomes between unprivileged and privileged groups = %f"
      % metric_orig_train.mean_difference())
It raises the following error:
AttributeError: 'DatasetMetric' object has no attribute 'mean_difference'
I know this is because the DatasetMetric class has no method named mean_difference. What is another way to compute the statistical_parity_difference for a multiclass classification dataset using AIF360?
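For reference, one workaround I am considering is a one-vs-rest reduction: binarize the label for each class and use BinaryLabelDatasetMetric, whose statistical_parity_difference (also exposed as mean_difference) is defined for binary labels. This is a minimal sketch, not AIF360's documented multiclass API; the integer class ids replacing the float labels are my own assumption for illustration.

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical toy data: integer class ids 0, 1, 2 instead of float labels
d = {'one': [.1, .2, .3, .4], 'sex': [0, 1, 1, 0],
     'three': [.5, .8, .9, 1], 'label': [0, 1, 2, 0]}
df = pd.DataFrame(data=d)

privileged_groups = [{'sex': 1}]
unprivileged_groups = [{'sex': 0}]

for c in sorted(df['label'].unique()):
    # Binarize one-vs-rest: 1.0 if the sample belongs to class c, else 0.0
    df_c = df.copy()
    df_c['label'] = (df_c['label'] == c).astype(float)
    bld = BinaryLabelDataset(df=df_c, label_names=['label'],
                             protected_attribute_names=['sex'],
                             favorable_label=1.0, unfavorable_label=0.0)
    metric = BinaryLabelDatasetMetric(bld,
                                      unprivileged_groups=unprivileged_groups,
                                      privileged_groups=privileged_groups)
    print("Class %s: statistical parity difference = %f"
          % (c, metric.statistical_parity_difference()))

Each iteration would report how much more often the unprivileged group receives that class than the privileged group, but I am unsure whether this per-class reduction is the intended way to do it in AIF360.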