Redefining Fairness: A Multi-dimensional Perspective and Integrated Evaluation Framework
Published in the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2025
As machine learning techniques continue to permeate application domains with significant societal impact, algorithmic fairness has become an increasingly critical concern in this established area of research. Existing studies on fairness typically assume that algorithmic bias stems from a single, predefined sensitive attribute in the data, overlooking the reality that multiple sensitive attributes often coexist in real-world data. Unlike previous works, this paper studies group fairness involving multiple sensitive attributes, a setting that greatly increases the difficulty of mitigating algorithmic bias. We posit that this multi-attribute perspective provides a more pragmatic model of fairness in real-world applications, and show how learning under such an intricate precondition yields new insights that better explain algorithmic fairness. Furthermore, we develop a first-of-its-kind unified metric, Multi-Fairness Bonded Utility (MFBU), designed to simultaneously evaluate and compare the trade-offs between fairness and utility of multi-source bias mitigation methods. By combining fairness and utility into a single, intuitive metric, MFBU gives model designers the flexibility to holistically evaluate and compare different fairness techniques. Extensive experiments on three real-world datasets substantiate the superior performance of the proposed methodology in reducing discrimination while preserving predictive accuracy.
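The paper itself defines MFBU; as an illustration only, the sketch below shows one simple way a combined fairness-utility score over multiple sensitive attributes could be assembled: accuracy discounted by the worst-case demographic parity gap across attributes. The function names, the parity-gap choice, and the combination rule here are all assumptions for illustration, not the paper's actual MFBU formula.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Max difference in positive-prediction rates across the groups
    defined by a single sensitive attribute."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def multi_attribute_fairness_utility(y_true, y_pred, sensitive):
    """Illustrative combined score: utility (accuracy) penalized by the
    worst-case demographic parity gap over all sensitive attributes.
    NOTE: a hypothetical stand-in, not the paper's MFBU definition."""
    utility = (y_true == y_pred).mean()
    worst_gap = max(demographic_parity_gap(y_pred, s) for s in sensitive)
    return utility * (1.0 - worst_gap), utility, worst_gap

# Toy usage with two sensitive attributes (e.g., sex and race codes).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
sex = rng.integers(0, 2, size=1000)
race = rng.integers(0, 3, size=1000)
score, util, gap = multi_attribute_fairness_utility(y_true, y_pred, [sex, race])
print(f"combined={score:.3f}  utility={util:.3f}  worst_gap={gap:.3f}")
```

Taking the worst case over attributes is one design choice that captures the multi-attribute setting's core difficulty (a method must be fair with respect to every attribute at once); other aggregations, and other fairness criteria than demographic parity, are equally plausible.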