Since the twentieth century, the development of automation, manufacturing technology and digital communications has initiated a sustained period of rapid socio-economic transformation. Artificial intelligence (AI) and machine learning are now a fixture of our online interactions, our communications and our working lives. The human resources (HR) sector is experiencing parallel shifts towards complete digitalisation, with advancements in AI enhancing people analytics and improving methods of responding to the complexities of recruiting and managing the modern workforce. It is time HR had a serious look at these developments.
Artificial intelligence is, on the face of it, an intimidating and perplexing idea. We can accept that a robot can have remarkable physical power (far beyond that of a human), but can robots also have meaningful access to the symbolic realms of human activity? Can they work with language and interact with a knowledge base in ways that emulate and even extend human cognitive capabilities? This form of the question captivated Alan Turing in the 1950s and prompted the so-called "Turing Test": could a machine converse with you in such a way as to fool you into thinking it was a human?
These philosophical preoccupations have perhaps coloured the popular understanding of AI more than the reality. Research in AI is not solely preoccupied with developing "computer consciousness". More commonly, AI is developed as a sophisticated form of design and engineering with tangible and functionally specific goals. Machine learning, once a sub-domain of AI and now a field in its own right, proposes that computers can be deployed in problem-solving contexts by making informed, general inferences from given data sets. If this is integrated into a feedback system, then strong, plausible inferences and correlations can be measured against actual performance.
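The idea above can be made concrete with a minimal sketch: a model infers a general pattern from example data, and its predictions are then measured against held-out examples (the "feedback"). All of the numbers here are invented for illustration, not drawn from real HR data.

```python
# Machine learning in miniature: fit a straight line to historical
# examples (inference from data), then check the resulting
# predictions against examples the model never saw (feedback).
# The data points below are purely illustrative.

def fit_line(xs, ys):
    """Least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# "Training" data: e.g. a hypothetical feature (years of experience)
# against a hypothetical performance score.
train_x = [1, 2, 3, 4, 5]
train_y = [2.1, 3.9, 6.2, 7.8, 10.1]

slope, intercept = fit_line(train_x, train_y)

def predict(x):
    return slope * x + intercept

# Feedback loop: measure the inference against unseen data.
test_x, test_y = [6, 7], [12.0, 14.1]
errors = [abs(predict(x) - y) for x, y in zip(test_x, test_y)]
print(round(slope, 2), round(max(errors), 2))
```

A small held-out error suggests the inferred pattern generalises; a large one would signal that the "strong, plausible inference" does not survive contact with new data.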
Algorithmic analysis of data does not run without bias. Data sets are pre-structured in line with historical precedents and patterns; algorithms are already "interested" because they are orientated to specific human interests and problems. These "biases" can reflect the culture of an organisation and can become hardwired into code. As every computer scientist knows, "garbage in, garbage out": it is dangerous to believe that algorithmic outputs are automatically meaningful and reliable, because they are only as good as their inputs. The lesson here is that expertise is...
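A toy example makes the point. Suppose a naive scoring rule is "learned" from a history of hiring decisions that happened to favour one group; the groups and records below are invented for illustration. The rule does nothing more than reproduce the skew it was given.

```python
# "Garbage in, garbage out": a scoring rule derived from biased
# historical decisions simply encodes that bias.
# Groups "A" and "B" and the records are hypothetical.
from collections import Counter

# Historical hiring outcomes, skewed towards group "A".
past_hires = ["A", "A", "A", "A", "B"]

# A naive "learned" rule: score a candidate's group by how often
# it appears among past hires.
rate = Counter(past_hires)

def score(group):
    return rate[group] / len(past_hires)

print(score("A"), score("B"))  # the skew in the data becomes the model
```

Nothing in the code is malicious; the bias lives entirely in the historical inputs, which is exactly why expertise in scrutinising those inputs matters.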