Abstract

While Artificial Intelligence (AI) and computer algorithms have become increasingly embedded in everyday life, concerns over biases in these systems have risen in step. Although much attention has been devoted to data-centric approaches, which locate the source of bias in the training data fed to these systems, this paper focuses on a second source of bias: biased programmers. This view holds that programmers may unintentionally and unconsciously embed their worldviews into their code. Drawing on an ontology of “bias in automated decision making” that distinguishes between first- and second-level discrimination and arbitrariness, we propose a novel twofold transparency concept to address second-level arbitrariness. To this end, we transpose and adapt methodological tools from the social sciences: reflexivity and positionality statements. First, we advocate the adoption of Algorithm Designers’ Reflexivity Statements (ADRSs): confidential internal written reflections that encourage programmers to critically examine and articulate their assumptions and potential biases. Second, we propose synthesising these reflections into an internal ADRSs Report and then into a public AI Positionality Statement (AIPS), which communicates to end users the residual and inherited biases that may skew algorithmic outputs. This dual approach not only enhances internal bias awareness but also equips AI users with a contextual framework for interpreting algorithmic decisions, thereby promoting fairness and increasing trust in AI systems.

Full text

© The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”).