A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
The Learning Fair Representations (LFR) class takes the following parameters:
LFR(unprivileged_groups, privileged_groups, k=5, Ax=0.01, Ay=1.0, Az=50.0, print_interval=250, verbose=0, seed=None)
The only required parameters are unprivileged_groups and privileged_groups, so I have initialized it with LFR = preprocessing.LFR(unprivileged_groups, privileged_groups).
However, when I run the LFR fit_transform and then retrain and test the model (sklearn's Random Forest) on the transformed dataset, it now predicts a positive outcome for every single data point. I have experimented with the LFR initialization, but no change to it has made a difference.
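For reference, a minimal sketch of my setup is below. The toy DataFrame, the 'sex' protected attribute, and the train/test split are placeholders standing in for my actual data; only the LFR initialization and the fit_transform-then-retrain steps mirror what I am actually doing.

import numpy as np
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms import preprocessing
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: one numeric feature, a binary protected attribute, a binary label.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'feat': rng.random(200),
    'sex': rng.integers(0, 2, 200),
    'label': rng.integers(0, 2, 200),
})
full = BinaryLabelDataset(df=df, label_names=['label'],
                          protected_attribute_names=['sex'])
dataset_train, dataset_test = full.split([0.7], shuffle=True)

unprivileged_groups = [{'sex': 0}]   # placeholder group definitions
privileged_groups = [{'sex': 1}]

# Initialize LFR with only the two required arguments.
lfr = preprocessing.LFR(unprivileged_groups=unprivileged_groups,
                        privileged_groups=privileged_groups)

# Learn the fair representation and transform (relabel) the training data.
dataset_train_transf = lfr.fit_transform(dataset_train)

# Retrain and evaluate the Random Forest on the transformed dataset.
rf = RandomForestClassifier(random_state=0)
rf.fit(dataset_train_transf.features, dataset_train_transf.labels.ravel())
preds = rf.predict(dataset_test.features)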
Do you have any suggestions on how to fix the LFR setup so that the retrained model doesn't predict only positive outcomes?