Algorithms have been created in a way that both reflects and perpetuates biases, particularly those regarding race and gender. In the past, algorithms were perceived as neutral: they come from technology, are coded, and are ‘logical,’ which led to a disavowal of the fact that they can be biased. In fact, the very human origins of machine learning and artificial intelligence make these systems prone to the same fallacies as human cognition. As these algorithms become more ubiquitous, creating more neutral machine learning is more important than ever.
Machines “learn” from datasets that have been given to them as a sample pool. Because the data are collected by humans, they reflect human biases in collection and sampling. The machines learn these biases as they learn the data, and their findings reflect the skewed sampling pool that they are given. When Apple's Face ID system was distributed globally, it had difficulty distinguishing between Chinese users' faces, allowing some people to unlock each other's phones. Similarly, Google Photos' AI mistakenly labelled Black people as gorillas because gorillas were the only comparable dark-skinned beings in its image-recognition training set. These failures reflect the lack of diversity in the technology industry, which results in oversights during coding and design processes.
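The mechanism behind failures like the Google Photos incident can be illustrated with a deliberately simplified sketch (not drawn from the article or from Google's actual system): a toy nearest-centroid classifier trained on a skewed sample pool that contains light-skinned faces and gorillas but no dark-skinned faces. The single "feature" dimension, the cluster centers, and all labels are hypothetical, chosen only to show how a model forced to pick from the categories it was trained on assigns underrepresented inputs to the nearest available one.

```python
import random

random.seed(0)

def centroid(xs):
    return sum(xs) / len(xs)

# Hypothetical 1-D feature space. The training pool reflects a skewed
# collection process: many light-skinned face examples and gorilla
# examples, but no dark-skinned face examples at all.
train = {
    "light-skinned face": [random.gauss(0.8, 0.05) for _ in range(500)],
    "gorilla":            [random.gauss(0.2, 0.05) for _ in range(500)],
}
centroids = {label: centroid(xs) for label, xs in train.items()}

def classify(x):
    # Nearest-centroid rule: the model can only choose among the
    # labels it has seen, whichever centroid is closest.
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# Dark-skinned test faces (hypothetically centered at 0.4) fall between
# the two learned centroids, closer to "gorilla" than to the other class.
dark_faces = [random.gauss(0.4, 0.05) for _ in range(100)]
predictions = [classify(x) for x in dark_faces]
print(predictions.count("gorilla"), "of", len(predictions),
      "dark-skinned faces labelled 'gorilla'")
```

Under these assumed numbers, nearly every underrepresented test point is assigned the wrong label, not because the classifier is malicious but because its sample pool never contained the correct category, mirroring the article's point that the bias enters through data collection.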
Although it is alarming that algorithms see the world through...




