Artificial Intelligence (AI) has recently catapulted into the public consciousness with the advent of large language models, such as ChatGPT, that are trained on vast amounts of data to understand text and generate original content. The spotlight on the incredible capabilities of ChatGPT and other AI systems, and the challenges that they are already posing in workplaces and classrooms, is raising interest in how to govern and regulate AI technology. In late July, leading AI developers convened at the White House and made voluntary commitments to make AI systems safe and secure. This attention is well placed, if belated, as AI has already shown that it can create misinformation and perpetuate human bias.1
As these present-day risks are coming into focus, top AI experts, including Geoffrey Hinton and Yoshua Bengio (winners, with Yann LeCun, of the 2018 Turing Award, often described as the Nobel Prize of computing), are sounding the alarm about the extreme, even existential, risks that future AI systems may bring. They worry especially that humans could lose control to runaway AI systems sometime in the next five to twenty years.2
This is not the first time that humanity has needed to simultaneously address serious current and future risks posed by a single technology. Other innovations—in energy, medicine, and agriculture—have presented similar challenges. We knew decades ago, for example, that burning fossil fuels caused air pollution that was harming human health and well-being in real time while also contributing to global warming, which was on track to cause massive global damage down the line. The appropriate policy response would have been to address both the current and future threats. To the degree that governments took meaningful action, however, it was to curtail certain harms from pollution; global warming has been allowed to proceed all but unimpeded. We cannot afford to repeat this mistake. It is imperative that we address both the challenges that AI systems are presenting today and the damage that they could inflict in the coming years. Here, I focus on the latter—in particular, the threat of runaway AI.
My core argument has four premises. First, we may, in the not-distant future, develop advanced AI systems that far exceed human capabilities in many key domains, including persuasion and manipulation; military...