Description
This research project was inspired by the challenges of integrating the social and legal concept of fairness into the technical domain of Machine Learning. Sectors such as criminal justice, healthcare, and finance have eagerly adopted automated decision-making tools to support high-stakes decisions, so there is a pressing need to assess these tools against strict fairness criteria. Such criteria are expressed as fairness metrics: quantitative formulations of fairness that aim to detect and mitigate unwanted biases in a system, so that its outcomes do not prompt or amplify discriminatory treatment of historically marginalised groups.

The research therefore aimed to develop a fairness metric that provides a unified and comprehensive bias assessment. The objectives were: first, to use an axiomatic approach to scientifically assess what an ideal metric should measure and do; second, to extract core components from existing metrics and adapt them into a simple mathematical formulation for the new metric; and finally, to design an experimental setup that tested the metric's behaviour against existing metrics.

The result is the Unity Measure, a metric that unifies the framework by combining the individual benefit function of the Generalised-Entropy Index with group weighting. The experimental results show that the Unity Measure is more sensitive than existing metrics in detecting skews in dataset distributions, and that its score is interpretable and insightful. This novel metric is therefore key to unifying the fairness-metrics landscape.
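The Unity Measure's exact formulation is not reproduced in this description, but the two ingredients it names, the individual benefit function of the Generalised-Entropy Index and group weighting, can be sketched. The Python snippet below is a minimal, hypothetical illustration: `benefit` and `generalised_entropy` follow the standard GEI definitions from the fairness literature, while `group_weighted_unfairness` and its population-share weighting are assumptions made for illustration, not the metric's actual formula.

```python
import numpy as np

def benefit(y_pred, y_true):
    """Per-individual benefit b_i = yhat_i - y_i + 1, the benefit function
    used by the Generalised-Entropy Index (Speicher et al., 2018)."""
    return np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float) + 1.0

def generalised_entropy(b, alpha=2.0):
    """Generalised-Entropy Index GE(alpha) of a benefit vector, for alpha
    outside {0, 1}: mean((b/mu)**alpha - 1) / (alpha * (alpha - 1))."""
    assert alpha not in (0.0, 1.0), "log-based special cases omitted in this sketch"
    mu = b.mean()
    return np.mean((b / mu) ** alpha - 1.0) / (alpha * (alpha - 1.0))

def group_weighted_unfairness(y_pred, y_true, groups, alpha=2.0):
    """Hypothetical composition of the two ingredients named above:
    per-group GE scores over individual benefits, weighted by each group's
    population share. Illustrative only; not the Unity Measure's formula."""
    b = benefit(y_pred, y_true)
    groups = np.asarray(groups)
    score = 0.0
    for g in np.unique(groups):
        mask = groups == g
        share = mask.mean()  # assumed weight: fraction of individuals in group g
        score += share * generalised_entropy(b[mask], alpha)
    return score

# Toy usage: binary ground truth and predictions with one protected attribute.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(group_weighted_unfairness(y_pred, y_true, groups))  # single scalar score
```

Weighting the per-group scores by population share is only one plausible choice; the point of the sketch is to show how an individual-level benefit term and a group-level weighting term can be composed into a single interpretable scalar.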