Artificially Generated Minorities (AGMs): The Veneer of Algorithmic Bias Correction

Jaja IR (2025)


Publication Type: Conference contribution

Publication year: 2025


Publisher: Springer Science and Business Media Deutschland GmbH

Book Volume: 2784 CCIS

Page Range: 543-556

Conference Proceedings Title: Communications in Computer and Information Science

Event location: Cape Town, South Africa

ISBN: 9783032117328

DOI: 10.1007/978-3-032-11733-5_34

Abstract

Algorithms often reinforce societal biases and stereotypes. This is especially concerning for minorities, who are disproportionately impacted and thereby threatened with further marginalization. Data fundamentalists frame this issue of algorithmic bias as stemming from data bias, indicated by the underrepresentation of some groups (minorities) in the datasets. Consequently, measures adopted to address algorithmic bias have been data-focused. A relatively recent data-focused measure is the deployment of what I term artificially generated minorities (AGMs): synthetic data used to increase the representation of underrepresented groups (minorities) in algorithms’ training datasets. Data fundamentalists make two central claims about AGMs: the representation claim, which holds that AGMs are representative of minorities, and the normative intervention claim, which holds that the deployment of AGMs addresses algorithmic bias. In this paper, I argue that AGMs fail to meet these claims, particularly in the context of algorithmic recruitment. First, I demonstrate that AGMs do not capture the experience of historic and systemic oppression, which defines minority status; hence, I contend that they do not meaningfully represent minorities. Second, I demonstrate that while AGMs facilitate the realization of the futuristic component of an adequate normative intervention, they undermine the reparative component; thus, I contend that AGMs do not adequately address algorithmic bias. Finally, I briefly highlight that the failure of AGMs to meet these claims indicates that a data-focused framing of algorithmic bias is overly simplistic and does not account for all the complexities involved in algorithmic bias and its correction, particularly in the context of algorithmic recruitment.
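
For readers unfamiliar with the measure the abstract critiques, the sketch below illustrates, under simplifying assumptions, one common way synthetic data is used to inflate the representation of an underrepresented group in a training set: interpolating new feature vectors between existing minority samples (a SMOTE-style approach). The function and variable names are illustrative only and are not taken from the paper.

import numpy as np

def generate_synthetic_minority_samples(X_minority, n_synthetic, rng=None):
    """SMOTE-style sketch: create synthetic minority-group feature vectors by
    interpolating between randomly chosen pairs of real minority samples."""
    rng = rng or np.random.default_rng(0)
    n = len(X_minority)
    # Pick random pairs of real minority rows and a random interpolation weight per pair.
    idx_a = rng.integers(0, n, size=n_synthetic)
    idx_b = rng.integers(0, n, size=n_synthetic)
    lam = rng.random((n_synthetic, 1))
    # Each synthetic row lies on the segment between the two chosen real rows.
    return X_minority[idx_a] + lam * (X_minority[idx_b] - X_minority[idx_a])

# Hypothetical usage: inflate minority-group representation in a recruitment-style
# feature matrix so that group counts match before model training.
rng = np.random.default_rng(42)
X_minority = rng.random((50, 8))    # 50 real minority-group applicants, 8 features
X_majority = rng.random((500, 8))   # 500 majority-group applicants
X_synthetic = generate_synthetic_minority_samples(X_minority, n_synthetic=450, rng=rng)
X_balanced = np.vstack([X_majority, X_minority, X_synthetic])
print(X_balanced.shape)  # (1000, 8)

The sketch shows only the mechanics of rebalancing; the paper's argument concerns whether such generated rows can meaningfully stand in for the minorities they are meant to represent.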

How to cite

APA:

Jaja, I. R. (2025). Artificially Generated Minorities (AGMs): The Veneer of Algorithmic Bias Correction. In A. Gerber & A. W. Pillay (Eds.), Communications in Computer and Information Science (pp. 543-556). Cape Town, South Africa: Springer Science and Business Media Deutschland GmbH.

MLA:

Jaja, Ibifuro Robert. "Artificially Generated Minorities (AGMs): The Veneer of Algorithmic Bias Correction." Proceedings of the 6th Southern African Conference for Artificial Intelligence Research, SACAIR 2025, Cape Town, South Africa, edited by Aurona Gerber and Anban W. Pillay, Springer Science and Business Media Deutschland GmbH, 2025, pp. 543-556.
