Closed jaared closed 1 year ago
Suggestion for documentation. We should inform the user here that we return either the fairness parity metric or 1/metric, since the two are equivalent. https://github.com/EqualityAI/EqualityML/blob/90e04435007a653b5c4c69dc5a9b86e0c5d34ce7/equalityml/fair.py#L414
Suggested text: "Returns the fairness metric score for the input fairness metric name. Note that when the fairness metric score is > 1, we return 1/score so that all results fall on a common scale for easy comparison."
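A minimal sketch of the normalization the suggestion describes, assuming the metric is a positive parity ratio. The function name `normalized_parity` is illustrative, not the EqualityML API:

```python
def normalized_parity(metric_score: float) -> float:
    """Return the parity score, inverting values > 1 so every
    result lies in (0, 1]; 1.0 means perfect parity.

    Illustrative sketch only -- not the EqualityML implementation.
    """
    if metric_score <= 0:
        raise ValueError("parity score must be positive")
    # A ratio r and its reciprocal 1/r describe the same disparity,
    # so both are mapped to the value <= 1 for easy comparison.
    return 1 / metric_score if metric_score > 1 else metric_score
```

For example, a ratio of 1.25 and its reciprocal 0.8 describe the same disparity, so both normalize to 0.8 and can be checked against a single threshold.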
Suggested text added. Very good comment. Thank you.