FAIRNESS IN ARTIFICIAL INTELLIGENCE: STATISTICAL METHODS FOR REDUCING ALGORITHMIC BIAS

Authors

  • Muhammad Ahmad
  • Idrees Ahmad Khan
  • Roidar Khan
  • Muhammad Saleem
  • Ijaz Ullah

Keywords:

Artificial Intelligence, Algorithmic Fairness, Bias Mitigation, Demographic Parity, Equal Opportunity, Disparate Impact

Abstract

Ensuring fairness in Artificial Intelligence (AI) has become a critical challenge, particularly in high-stakes decision-making domains such as finance and employment. This study investigates statistical methods for reducing algorithmic bias using two benchmark datasets: Adult Income (sex as the sensitive attribute) and German Credit (age as the sensitive attribute). Baseline models, including Logistic Regression (Acc = 0.85, AUC = 0.90) and Gradient Boosted Trees (Acc = 0.87, AUC = 0.92), achieved strong predictive performance but exhibited notable fairness disparities, with demographic parity (DP) differences exceeding 0.17 and disparate impact (DI) ratios falling below the accepted 0.8 threshold. Fairness interventions, including reweighing, fairness-regularized learning, and equalized odds post-processing, significantly improved fairness metrics. For instance, post-processing reduced both the DP and equal opportunity (EO) differences to 0.06 while maintaining AUC at 0.90, and fair-regularized Logistic Regression improved DI to 0.86 without loss in accuracy. In the German Credit dataset, reweighing and post-processing reduced DP differences by nearly half (0.12 → 0.06) and improved DI to above 0.85, though with slight declines in accuracy (0.74 → 0.72). Group-level analyses further revealed structural inequities: males had a higher true positive rate (TPR = 0.862) but also a much higher false positive rate (FPR = 0.530), while females had greater accuracy (0.847) yet a very low TPR (0.304), reflecting systematic exclusion from positive outcomes. These findings underscore the importance of fairness-aware modeling, demonstrating that algorithmic bias can be mitigated without substantial sacrifices in predictive performance.
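The group-fairness metrics reported in the abstract follow standard definitions: the DP difference is the gap in selection rates between groups, the EO difference is the gap in true positive rates, and the DI ratio compares the lowest to the highest selection rate against the 0.8 rule. A minimal sketch of these computations, using illustrative function names and synthetic toy data (not the paper's datasets or code):

```python
import numpy as np

def group_rates(y_true, y_pred, sensitive):
    """Per-group selection rate P(yhat=1 | A=g) and TPR P(yhat=1 | y=1, A=g)."""
    rates = {}
    for g in np.unique(sensitive):
        mask = sensitive == g
        sel = y_pred[mask].mean()                          # selection rate
        pos = mask & (y_true == 1)
        tpr = y_pred[pos].mean() if pos.any() else np.nan  # true positive rate
        rates[g] = (sel, tpr)
    return rates

def fairness_metrics(y_true, y_pred, sensitive):
    rates = group_rates(y_true, y_pred, sensitive)
    sel = [r[0] for r in rates.values()]
    tpr = [r[1] for r in rates.values()]
    dp_diff = max(sel) - min(sel)   # demographic parity difference
    eo_diff = max(tpr) - min(tpr)   # equal opportunity (TPR) difference
    di_ratio = min(sel) / max(sel)  # disparate impact ratio (0.8 rule)
    return dp_diff, eo_diff, di_ratio

# Toy example with a binary sensitive attribute (synthetic values)
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
a      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
dp, eo, di = fairness_metrics(y_true, y_pred, a)
```

A model is conventionally flagged for disparate impact when `di` falls below 0.8, which is the threshold the baseline models in the abstract violate before mitigation.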

Published

22-08-2025

How to Cite

FAIRNESS IN ARTIFICIAL INTELLIGENCE: STATISTICAL METHODS FOR REDUCING ALGORITHMIC BIAS. (2025). Journal of Media Horizons, 6(3), 2206-2214. https://jmhorizons.com/index.php/journal/article/view/539