# Metrics

`skfair.metrics.equal_opportunity_score(sensitive_column, positive_target=1)`

The equal opportunity score calculates the ratio between the probability of a true positive outcome given that the sensitive attribute (column) is true and the same probability given that the sensitive attribute is false.

$\min \left(\frac{P(\hat{y}=1 | z=1, y=1)}{P(\hat{y}=1 | z=0, y=1)}, \frac{P(\hat{y}=1 | z=0, y=1)}{P(\hat{y}=1 | z=1, y=1)}\right)$

This score is useful as a fairness metric: a value close to 1 indicates that the model's true positive rate is similar for both groups of the sensitive attribute, while a value close to 0 indicates a large disparity.

Source: M. Hardt, E. Price and N. Srebro (2016), Equality of Opportunity in Supervised Learning

Parameters

- `sensitive_column` – name of the column containing the binary sensitive attribute (when `X` is a dataframe) or the index of the column (when `X` is a numpy array).

- `positive_target` – the label of the class associated with a positive outcome.

Returns

A function `(clf, X, y_true) -> float` that calculates the equal opportunity score for `z = sensitive_column`.
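Under the hood the score is just the smaller of the two ratios of group-wise true positive rates. As a minimal sketch (plain numpy, not skfair itself; the helper name `equal_opportunity_ratio` and the toy data are illustrative), the same quantity can be computed directly from labels, predictions and the sensitive column:

```python
import numpy as np

def equal_opportunity_ratio(y_true, y_pred, z):
    """min of TPR(z=1)/TPR(z=0) and its inverse, matching the formula above."""
    y_true, y_pred, z = (np.asarray(a) for a in (y_true, y_pred, z))
    tpr_z1 = y_pred[(z == 1) & (y_true == 1)].mean()  # P(y_hat=1 | z=1, y=1)
    tpr_z0 = y_pred[(z == 0) & (y_true == 1)].mean()  # P(y_hat=1 | z=0, y=1)
    return min(tpr_z1 / tpr_z0, tpr_z0 / tpr_z1)

# Toy data: among the y=1 samples, group z=1 has TPR 0.5, group z=0 has TPR 1.0
print(equal_opportunity_ratio([1, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 0]))  # 0.5
```

Per the signature above, the skfair scorer itself is curried: `equal_opportunity_score("my_column")(clf, X, y_true)` evaluates a fitted classifier, so it can also be passed anywhere a `(estimator, X, y)` scoring callable is expected.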

`skfair.metrics.p_percent_score(sensitive_column, positive_target=1)`

The p_percent score calculates the ratio between the probability of a positive outcome given that the sensitive attribute (column) is true and the same probability given that the sensitive attribute is false.

$\min \left(\frac{P(\hat{y}=1 | z=1)}{P(\hat{y}=1 | z=0)}, \frac{P(\hat{y}=1 | z=0)}{P(\hat{y}=1 | z=1)}\right)$

This score is useful as a fairness metric: a value close to 1 indicates that the model predicts the positive class at a similar rate for both groups of the sensitive attribute, regardless of the true labels.

Source: M. Zafar et al. (2017), Fairness Constraints: Mechanisms for Fair Classification

Parameters

- `sensitive_column` – name of the column containing the binary sensitive attribute (when `X` is a dataframe) or the index of the column (when `X` is a numpy array).

- `positive_target` – the label of the class associated with a positive outcome.

Returns

A function `(clf, X, y_true) -> float` that calculates the p percent score for `z = sensitive_column`.
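Unlike the equal opportunity score, this ratio does not condition on the true label: it compares the raw positive-prediction rates of the two groups. A minimal sketch of the same quantity (plain numpy, not skfair itself; the helper name `p_percent_ratio` and the toy data are illustrative):

```python
import numpy as np

def p_percent_ratio(y_pred, z):
    """min of P(y_hat=1 | z=1)/P(y_hat=1 | z=0) and its inverse."""
    y_pred, z = (np.asarray(a) for a in (y_pred, z))
    p_z1 = y_pred[z == 1].mean()  # P(y_hat=1 | z=1)
    p_z0 = y_pred[z == 0].mean()  # P(y_hat=1 | z=0)
    return min(p_z1 / p_z0, p_z0 / p_z1)

# Toy data: group z=1 is predicted positive 1/3 of the time, group z=0 always
print(p_percent_ratio([1, 0, 0, 1, 1, 1], [1, 1, 1, 0, 0, 0]))  # ≈ 0.333
```

Note that `y_true` is only part of the scorer signature for API compatibility; the metric itself depends solely on the predictions and the sensitive column.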