Bias and Fairness in Facial Recognition AI: An Experimental Evaluation of Demographic Disparities and Mitigation Strategies
DOI:
https://doi.org/10.61173/gq1fqz50

Keywords:
Facial Recognition, Algorithmic Bias, Fairness in AI, Demographic Disparities, Ethical AI

Abstract
This study investigates whether facial recognition artificial intelligence (AI) systems are fair across demographic groups. Although facial recognition has been rapidly adopted in security, law enforcement, and business, recent reports have demonstrated significant performance disparities, drawing ethical and social criticism. The research develops three hypotheses: (H1) facial recognition systems produce markedly higher error rates for dark-skinned people and women than for light-skinned men; (H2) training on balanced datasets eliminates demographic bias; and (H3) fairness-aware training improves performance equality across groups. The primary data are large open-source face datasets: Racial Faces in the Wild (RFW), Balanced Faces in the Wild (BFW), CASIA-Face-Africa, and KANFace. Open-source models, including ResNet and VGGFace2, were trained and evaluated on these datasets. Performance was assessed using confusion matrices, fairness metrics (Demographic Parity Difference, Equalized Odds), and statistical tests (t-tests, ANOVA, chi-square). The findings reveal clear demographic disparities, consistent with previous research, but balanced datasets and debiasing approaches yielded measurable gains. Nonetheless, trade-offs remained between overall accuracy and fairness. The results support the view that demographic equity in AI requires not only technical interventions but also global governance, dataset transparency, and ethical oversight.
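For readers unfamiliar with the two fairness metrics named in the abstract, the following is a minimal illustrative sketch of how they are commonly computed from binary predictions and group labels. It is not the study's code; all function names and the toy data are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [np.mean(y_pred[groups == g]) for g in np.unique(groups)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, groups):
    """Largest cross-group gap in true-positive or false-positive rate."""
    tprs, fprs = [], []
    for g in np.unique(groups):
        mask = groups == g
        yt, yp = y_true[mask], y_pred[mask]
        tprs.append(np.mean(yp[yt == 1]))  # true positive rate for group g
        fprs.append(np.mean(yp[yt == 0]))  # false positive rate for group g
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Toy example: two groups with unequal positive-prediction rates
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(y_pred, groups))      # 0.75 - 0.25 = 0.5
print(equalized_odds_gap(y_true, y_pred, groups))         # max(TPR gap, FPR gap) = 0.5
```

A value of 0 on either metric indicates parity across groups; larger values indicate the kind of demographic disparity the study measures.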