Ethics and Information Technology (2007) 9:153–164. © Springer 2007. DOI 10.1007/s10676-007-9138-2
AI Armageddon and the Three Laws of Robotics
Lee McCauley
Department of Computer Science, University of Memphis, 374 Dunn Hall, Memphis, TN 38152, USA
E-mail: [email protected]
Abstract. After 50 years, the fields of artificial intelligence and robotics capture the imagination of the general public while, at the same time, engendering a great deal of fear and skepticism. Isaac Asimov recognized this deep-seated misconception of technology and created the Three Laws of Robotics. The first part of this paper examines the underlying fear of intelligent robots, revisits Asimov's response, and reports on some current opinions on the use of the Three Laws by practitioners. Finally, an argument against robotic rebellion is made along with a call for personal responsibility and suggestions for implementing safety constraints in intelligent robots.
Key words: artificial intelligence, robot, three laws, Asimov, Roboticist's oath, Frankenstein Complex
Introduction
In the late 1940s a young author by the name of Isaac Asimov began writing a series of stories and novels about robots. That young man would go on to become one of the most prolific writers of all time and one of the cornerstones of the science fiction genre. As the modern idea of a computer was still being refined, this imaginative boy of 19 looked deep into the future and saw bright possibilities; he envisioned a day when humanity would be served by a host of humanoid robots. But he knew that fear would be the greatest barrier to success and, consequently, implanted all of his fictional robots with the Three Laws of Robotics. Above all, these laws served to protect humans from almost any perceivable danger. Asimov believed that humans would put safeguards into any potentially dangerous tool and saw robots as just advanced tools.
Throughout his life Asimov believed that his Three Laws were more than just a literary device; he felt that scientists and engineers involved in robotics and Artificial Intelligence (AI) research had taken his Laws to heart (Asimov 1990). If he was not misled before his death in 1992, then attitudes have changed since then. Even though knowledge of the Three Laws of Robotics seems universal among AI researchers, there is the pervasive attitude that the Laws are not implementable in any meaningful sense.
With the field of Artificial Intelligence now 50 years old and AI products in extensive use (Cohn 2006), it is time to re-examine Asimov's Three Laws from foundations to implementation. In the process, we must address the underlying fear of uncontrollable AI.
The Frankenstein Complex
In 1920 a Czech author by the name of Karel Capek wrote the widely popular play R.U.R., which stands for Rossum's Universal Robots. The word robot, which he or, possibly, his brother Josef coined, comes from the Czech word robota meaning drudgery or servitude (Jerz 2002). As typifies much of science fiction since that time, the story is about artificially created workers that ultimately rise up to overthrow their human creators. Even though Capek's Robots were made out of biological material, they had many of the traits associated with the mechanical robots of today: a human shape that is, nonetheless, devoid of some human elements, most notably, for the sake of the story, reproduction.
Even before Capek's use of the term robot, however, the notion that science could produce something that it could not control had been explored most acutely by Mary Shelley under the guise of Frankenstein's monster (Shelley 1818). The full title
of Shelley's novel is Frankenstein, or The Modern Prometheus. In Greek mythology Prometheus brought fire (technology) to humanity and, consequently, was soundly punished by Zeus. In medieval times, the story of Rabbi Judah Loew told of how he created a man from the clay (in Hebrew, a golem) of the Vltava river in Prague and brought it to life by putting a shem (a tablet with a Hebrew inscription) in its mouth. The golem eventually went awry, and Rabbi Loew had to destroy it by removing the shem.
What has been brought to life here, so to speak, is the almost religious notion that there are some things that only God should know. While there may be examples of other abilities that should remain solely as God's bailiwick, it is the giving of Life that seems to be the most sacred of God's abilities. But Life, in these contexts, is deeper than merely animation; it is the imparting of a soul. For centuries, scientists and laymen alike have looked to distinct abilities of humans as evidence of our uniqueness, of our superiority over other animals. Perhaps instinctively, this search has centered almost exclusively on cognitive capacities. Communication, tool use, tool formation, and social constructs have all, at one time or another, been pointed to as defining characteristics of what makes humans special. Consequently, many have used this same argument to delineate humans as the only creatures that possess a soul. To meddle in this area is to meddle in God's domain. This fear of man broaching, through technology, into God's realm and being unable to control his own creations is referred to as the Frankenstein Complex by Isaac Asimov in a number of his essays (most notably Asimov 1978).
The Frankenstein Complex is alive and well. Hollywood seems to have re-kindled the love/hate relationship with robots through a long string of productions that each rely on this fear as a central element. To make the point, here is a partial list: Terminator (all three); I, Robot; A.I.: Artificial Intelligence; 2010: a Space Odyssey; Cherry 2000; D.A.R.Y.L.; Blade Runner; Short Circuit; Electric Dreams; the Battlestar Galactica series; Robocop; Metropolis; Runaway; Screamers; The Stepford Wives; and Westworld. This, of course, leaves out the numerous news stories, documentaries, and made-for-TV movies that air regularly. Even though several of these come from sci-fi literature, the fact remains that the predominant theme chosen when robots are on the big or small screen involves their attempt to harm people or even all of humanity. This is not intended as a critique of Hollywood. To the contrary: where robots are concerned, the images that people can most readily identify with, those that capture their imaginations and tap into their deepest fears, involve the supplanting of humanity by its metallic offspring. In typical Hollywood style, they are using a pre-existing condition for entertainment purposes. Unfortunately, in doing so, they also reinforce this fear.
Even well respected individuals in both academia and industry have expressed their belief that humans will engineer a new species of intelligent machines that will replace us. Ray Kurzweil (1999, 2005), Kevin Warwick (2002), and Hans Moravec (1998) have all weighed in on this side. Bill Joy, co-founder of Sun Microsystems, expressed in a 2000 Wired Magazine article (Joy 2000) his fear that artificial intelligence would soon overtake humanity and would, inevitably, take control of the planet for one purpose or another. The strongest point in their arguments hinges on the assumption that the machines will become too complicated for humans to build using standard means and that humans will, therefore, relinquish the design and manufacture of future robots to the intelligent machines themselves. Joy argues that robotics, genetic engineering, and nanotechnology pose a unique kind of threat that the world has never before faced: "robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once, but one bot can become many, and quickly get out of control." Clearly, Joy is expressing the underpinnings of why the public at large continues to be gripped by the Frankenstein Complex.
No robotic apocalypse is coming
Is the extinction of humanity at the hands of autonomous robots a real possibility? The truth is that the likelihood of a robotic uprising that destroys or subjugates humanity is quite low. As pointed out previously, the primary argument that robots will take over the world is that they will eventually be able to design and manufacture themselves in large numbers, thereby setting the inevitability of evolution in motion. Once evolution starts to run its course, humanity is out of the loop and will eventually be rendered superfluous. On the surface this seems like a perfectly logical argument that strikes right at the heart of the Frankenstein Complex. However, there are several key assumptions that must hold in order for this scenario to unfold as stated.
First, there is the underlying assumption that large numbers of highly intelligent robots will be desired by humans. At first, this might seem reasonable. Why wouldn't we want lots of robots to do all the housework, dangerous jobs, or any menial labor? If those jobs require higher-order intelligence to accomplish,
then we already have a general-purpose machine that is cheaply produced and in abundance: humans. If they do not require higher-order intelligence, then a machine with some intelligence can be built to handle that specific job more economically than a highly intelligent general robot. In other words, we may have smarter devices that take over some jobs and make others easier for humans, but those devices will not require enough intelligence to even evoke a serious discussion of their sentience. Take, for example, the popular Roomba vacuuming robot produced by iRobot (iRobot Corporation: Home Page 2006). Most vacuum cleaners manufactured today must take into account that the user is a human, thereby dictating certain constraints on the overall design. There must be a handle, it must be relatively lightweight, the button(s) must be easy to reach and understandable by humans, etc. The Roomba, along with any future versions of such a machine, however, does not have to follow these constraints. The current models simply roll around randomly trying not to get stuck or fall down stairs. When it gets low on batteries, the robot follows an RF beacon back to its recharging station. From the perspective of robotics researchers, the Roomba is a very dumb robot. Even so, it gets the job done. From an engineering perspective, this is an effective tool for its designed task. Future versions might add a memory and mapping function that would make it more efficient at cleaning all areas within a reasonable time rather than relying on random chance, but the fact remains that this task does not require higher-order intelligence.
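To make the contrast with higher-order intelligence concrete, the reactive strategy just described fits in a few lines. The sketch below is illustrative only; it is written in Python, and the sensor and actuator names (battery_low, cliff_detected, rf_beacon_bearing, and so on) are hypothetical stand-ins rather than anything taken from iRobot's actual software.

import random

def control_step(robot):
    # Purely reactive cleaning behavior: no map, no memory, no model of people.
    if robot.battery_low():
        robot.steer_toward(robot.rf_beacon_bearing())  # follow the RF beacon home
    elif robot.cliff_detected() or robot.bumper_pressed():
        robot.back_up()
        robot.turn(random.uniform(90, 270))            # pick a new random heading
    else:
        robot.drive_forward()                          # otherwise just keep cleaning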
One might argue that a general-purpose robot would be able to use the currently existing tools and would, therefore, be more cost-effective in the long run. Such a robot would necessarily be humanoid due to the fact that our existing tools are designed for human use. It would also need the dexterity, balance, and ability to adapt to changing situations of a human. This kind of a robot would be much more complex than any motor vehicle currently on the road and would likely cost at least as much as a luxury car. This automatically relegates it to a much smaller market. On the other hand, a specialized semi-autonomous robot, like the Roomba, for a specific task does not need to cost that much more than a comparable device to be used by a human. The current price for a Roomba ranges from $129 to $349 while a standard vacuum cleaner will cost anywhere from $129 to $549 at your local department or home improvement store. If the average person needs to buy a new vacuum cleaner and they could purchase either a standard vacuum, a Roomba-like robot, or a standard vacuum and a very expensive general-purpose robot, which option is most likely to be
appealing? Although the jury is still out on whether a standard vacuum or a Roomba-like system will grace most households, only the very rich would even seriously consider the third option. This same logic holds for any appliance you may need in your home. Consequently, we will see the mass production of dumb but smart enough devices, but not general-purpose robots or artificial intelligences. This is not to say that we won't create in some lab a human-level artificial intelligence. We will do it because we can. These will be expensive research oddities that will get a lot of attention and raise all of the hard philosophical questions, but their numbers will be low and they will be closely watched because of their uniqueness.
Another assumption underlying our doomsday of reproducing robots is that humans would never actually check to see if the robots produced deviated from the desired output. Especially if they are being mass produced, this seems quite out of the question. Approximately 280 cars were sacrificed to crash tests in 2006 by the Insurance Institute for Highway Safety and the National Highway Traffic Safety Administration alone. Every model sold in the United States undergoes a huge battery of tests before it is allowed on the streets. Why would robots be any less regulated? This further reduces the chances of evolutionary-style mutations. Of course there will still be defects that crop up for a given robot that did not show up in the tests, just as with automobiles. Also, just as with automobiles, these defects will be dealt with and not passed on to future generations of robots.
Finally, the assumption is made that evolution will occur on an incredibly fast time scale. There are a couple of ways that this might come about. One argument goes that since these machines will be produced at such a high rate of speed, evolution will happen at a prodigious rate and that it will catch humans by surprise. How fast would intelligent robots evolve? In 1998 just under 16.5 million personal computers were manufactured in the US. While computer components are built all around the world, the vast majority are assembled in the US. For argument's sake, let's say that the world's production of computers is twice that, some 33 million. Let's also assume that number has quadrupled since 1998 to 132 million computers manufactured worldwide in one year. These are moderately complex machines created at a rate at least as fast as our future intelligent robots might be. In 2006, there were more than 130 million human births on the planet, about equal to our hypothetical number of computers produced. Evolution works, outside of sexual reproduction, by making mistakes during the copying of one
individual: a mutation. If we assume that our manufacturing processes will make mistakes on par with biological processes, then the evolution of our reproducing machines will be roughly equal to that of human evolution, if one discounts the effect of genetic crossover via sexual reproduction. Furthermore, each robot produced must have all of the knowledge, capability, resources and time to build more robots; otherwise the mutations don't propagate and evolution goes nowhere. Why would we give our house-cleaning robot the ability to reproduce on its own? Even if we allowed for the jumpstarting of the process by already having a fairly intelligent robot running the manufacturing show, this would be comparable to starting with an Australopithecus and waiting to come up with a Homo sapiens sapiens. To sum up, if we start with a fairly intelligent seed robot that can reproduce, and it builds copies of itself, and each one of the copies builds copies of themselves on and on to create large numbers of reproducing robots, then it will take thousands of years for the process to create any meaningful changes whatsoever, much less a dominant super species. There are no likely circumstances under which this sort of behavior would go on unchecked by humans.
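The production-rate comparison above can be restated as a small back-of-the-envelope calculation. The figures below are simply the assumptions already stated in this section (the 1998 US output, a doubling for world output, a quadrupling since then) expressed in Python; no new data is introduced.

us_pcs_1998 = 16_500_000               # just under 16.5 million PCs assembled in the US in 1998
world_pcs_1998 = 2 * us_pcs_1998       # assume world production is twice the US figure
world_pcs_now = 4 * world_pcs_1998     # assume output has since quadrupled: ~132 million per year
human_births_per_year = 130_000_000    # roughly the 2006 figure cited above

print(f"hypothetical robots per year: {world_pcs_now:,}")
print(f"human births per year:        {human_births_per_year:,}")
print(f"production ratio:             {world_pcs_now / human_births_per_year:.2f}")
# With comparable copying-error ('mutation') rates and a roughly one-to-one production
# ratio, undirected robot evolution gains no speed advantage over human evolution.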
Up to this point we have discussed primarily physical robots. Just as much fear may exist for general artificial intelligence (AI). AI is a broad term typically encompassing any human-made system that performs tasks considered to require some level of intelligence. Robotics, therefore, is a subclass of AI that includes physical animation. There is also a fuzzy area in which an AI controls a simulated rather than physical robot, but from the AI's perspective there is really no difference. One possible way of increasing the rate of evolution might be to use a more directed approach that focuses on the AI rather than the physical robotic embodiment. Humans could build a robot or AI with the sole task of designing and building a new AI that is better at designing and building AI, which builds another AI, etc. This directed evolution is likely to be much quicker and is also likely to be something that an AI researcher might try. This would also be a very expensive endeavor. Even if the AI's body is virtual, there must still be a physical computer to house it. Either the AI is custom built in hardware with each successive version or it is created in a virtual manner and run within some larger system. This system would likely need to be quite large if the AI is intended to be truly intelligent. As the versions become more and more adept and complex, the system that houses the AI would need to be increasingly complex and ultimately a proprietary machine would need to be created whose purpose would be to run the AI. We are
then back to the hardware versions and progression from there. Another problem with this notion is that the very first AI in this chain, since we are not using evolutionary processes, will need a great deal of knowledge regarding the nature of intelligence in order to effectively guide the development. In other words, there must be some criteria against which the AI can determine its own success or failure and then have enough knowledge of its own construction to purposefully design a new version of itself. Solving the problem of creating a truly intelligent machine using this method is, therefore, a catch-22; we would have to already know how to create an intelligent machine before we could create a machine to create intelligence.
One might still argue that this software-only AI could be implemented using some form of learning or genetic algorithm based on some general intelligence measure. Even if this is implemented at some point in the future, it is not something that will be accessible to your everyday hacker due to the cost and will, therefore, be relegated to a small number of academic institutions or corporations. The system needed to run this genetic algorithm would need hundreds or thousands of times more computational resources than the one described in the previous paragraph. Here we are talking not about a single intelligent entity redesigning itself; instead, we are talking about hundreds of such entities evolving in a virtual environment (a real environment would imply physical embodiment). For evolution to occur in some meaningful way, this environment would need to be just as complex and dynamic as the real world, with appropriate sensory/motor interactions available to the evolving agents. Just running the environment in real-time would require huge amounts of computing power, not to mention the huge amounts needed for each of the agents present in any given generation.
Even so, if we were to create some form of intelligent creature in such a manner, would it be dangerous? The first requirement would be that there is some connection between the virtual and the real world. Secondly, the manipulation within the real world must be part of the fitness function of the evolving individuals. Why is this the case? For any evolutionary process, there must be a payoff for any sustained environmental adaptation. Without a payoff, evolutionary processes will steer the development of a population towards some other configuration that does have a payoff. For artificial genetic algorithms this is encapsulated in the fitness function. Even if the fitness function is essentially a survival of the fittest, there must be some dependency on actions in the real world for survival; otherwise the resulting agents will not have any ability to interact in the real
world and, therefore, would be of no threat. For a more detailed discussion of evolutionary algorithms and genetic processes see Holland (1975). The only scenario that does not ultimately involve a physical presence would be evolving agents whose environment, or part of it, is the Internet. While there is potential for great harm here, it is likely to be on the scale of a particularly nasty computer virus, not a world-ending cataclysm.
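To make the fitness-function point concrete, the toy genetic algorithm below, a minimal sketch rather than anything drawn from the works cited here, evolves bit strings toward an arbitrary target pattern. The only thing the population ever adapts to is what the fitness function pays off; capabilities that carry no payoff simply never emerge.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # an arbitrary 'environment' to adapt to

def fitness(genome):
    # Payoff = number of bits matching the target; this is the only selective pressure.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.05):
    # Copying errors: each bit flips with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                              # keep the fittest
    population = [mutate(random.choice(parents)) for _ in range(50)]

best = max(population, key=fitness)
print("best genome:", best, "fitness:", fitness(best), "out of", len(TARGET))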
Is there some set of circumstances in which a genetic algorithm could evolve in some simulation of a subset of the real world that, nonetheless, has the potential to obliterate humanity? Yes. To borrow one of the few plausible scenarios from Hollywood, a situation like the one presented in the 1983 movie WarGames is loosely possible. In this movie, a computer system is designed to continuously play and learn from a realistic game of thermonuclear war, with switches built in so that, in the event of a real war, it could take over the launching of the missiles. At one point, the computer gets confused between the game and reality and begins the countdown to the launch of the real nuclear missiles. This example demonstrates the need for a virtual environment that in some way mirrors the real world and that also possesses some connection between the system and the real world. If cases such as this present themselves, it is the AI creator's responsibility to consider the ramifications of what they create.
Note that to have any real possibility of destroying humanity, even by accident, the AI had to ultimately have some physical way of causing harm. At the point in the WarGames movie when the AI takes control of the missile launch sequence, it has become a form of physical robot. For this reason, the discussion that follows applies both to fully autonomous robots and to largely disembodied AI that, nonetheless, have some way to affect the real world.
The Three Laws of Robotics
As demonstrated in the preceding section, there is still the possibility of great harm, however remote. Also, even if the Frankenstein Complex is largely unfounded with regards to the destruction of humanity, there is still the visceral fear that individuals have with regards to the robot standing in front of them: the one that could malfunction and hurt them. This sort of situation is much more likely. How can we, as robotics researchers, dissuade this fear? Isaac Asimov, while still a teenager, noticed the recurring theme of "man builds robot, robot kills man" in literature of the time and felt that this was not the way that such an advanced technology would
unfold. He made a conscious effort to combat the Frankenstein Complex in his own robot stories (Asimov 1990).
What are the three laws?
Beyond just writing stories about good robots, Asimov imbued them with three explicit laws, first expressed in print in the story "Runaround" (Asimov 1942):
1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Asimov's vision
Many of Asimov's robot stories were written in the 1940s and 1950s, before the advent of the modern electronic computer. They tend to be logical mysteries where the characters are faced with an unusual event or situation in which the behavior of a robot implanted with the Three Laws of Robotics is of paramount importance. Every programmer has had to solve this sort of mystery. Knowing that a computer (or robot) will only do what it is told, the programmer must determine why it didn't do what he or she told it to do. It is in this way that Asimov emphasizes both the reassuring fact that the robots cannot deviate from their programming (the Laws) and the limitations of that programming under extreme circumstances.
Immutable

A key factor of Asimov's Three Laws is that they are immutable. He foresaw a robot brain of immense complexity that could only be built by humans at the mathematical level. Therefore, the Three Laws were not the textual form presented above, but were encoded in mathematical terms directly into the core of the robot brain. This encoding could not change in any significant manner during the course of the robot's life. In other words, learning was something that a robot did only rarely and with great difficulty. Instead, Asimov assumed that a robot would be programmed with everything it needed to function prior to its activation. Any additional knowledge it needed would be in the form of human commands and the robot would not express any ingenuity or creativity in the carrying out of those commands.
The one exception to this attitude can be seen in "The Bicentennial Man" (Asimov 1976). In the story, Andrew Martin is a robot created in the early days of robots when the mathematics governing the creation of the robot's positronic brain was imprecise. Andrew is able to create artwork and perform scientific exploration. A strong plot element is the fact that, despite Andrew's creativity, he is still completely bound by the Three Laws, culminating at one point in a scene where two miscreants almost succeed in ordering Andrew to dismantle himself. The point being made is that, if a friend had not arrived on the scene in time, the robot would have been forced by the Three Laws to obey the order given to it by a human even at the unnecessary sacrifice of its own life. Even with the amazing accomplishments of this imaginative robot, Andrew Martin, the fictional company that built him saw his very existence as an embarrassment solely because of the fear that his intellectual freedom fueled in the general populace: the Frankenstein Complex. If a robot could be intellectually creative, couldn't it also be creative enough to usurp the Three Laws?
Asimov never saw this as a possibility, although he did entertain the eventual addition of a zeroth law that was essentially a rewrite of the first law with the word human replaced with humanity. This allowed a robot with the zeroth law to harm or allow a human to come to harm if it was, in its estimation, to the betterment of humanity (Asimov 1985). Asimov's image for the near future of robotics, however, viewed robots as complicated tools and nothing more. As with any complicated machine that has the potential of harming a human during the course of its functioning, he assumed that the builders had the responsibility of providing appropriate safeguards (Asimov 1978). One would never think of creating a band saw or a nuclear reactor without reasonable safety features.
Explicit

For Asimov, the use of the Three Laws was just that simple; they were an explicit elaboration of implicit laws already in effect for any tool humans have ever created (Asimov 1978). He did not, however, think that robots could not be built without the Three Laws. Asimov simply felt that reasonable humans would naturally include them by whatever means made sense, whether they had his Three Laws in mind or not. A simple example would be the emergency cutoff switch found on exercise equipment and most industrial robots being tended by humans. There is a physical connection created between the human and the machine. If the human moves out of a safety zone, then the connection is broken and the machine shuts off. This is a simplistic form of sensor designed to convey when the machine might injure the human.
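A minimal sketch of this kind of safeguard, assuming hypothetical sensor and motor interfaces rather than any particular product, treats the cutoff as a check that must pass on every control cycle, independent of whatever task the machine is performing:

import time

def run_machine(connection_intact, run_task_step, stop_motors):
    # connection_intact: callable returning True while the human remains in the safety
    #                    zone and the physical connection is unbroken
    # run_task_step:     callable performing one small step of the machine's normal work
    # stop_motors:       callable that halts all motion immediately
    while True:
        if not connection_intact():   # human left the zone: the tether has pulled free
            stop_motors()             # fail safe before doing anything else
            return
        run_task_step()               # otherwise carry on with the machine's function
        time.sleep(0.01)              # control-loop tick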
Each machine or robot can be very different in its function and structure; therefore, the mechanisms employed to implement the Three Laws are necessarily different. Asimov's definition of a robot was somewhat homogeneous in that their shape was usually human-like and their positronic brains tended to be mostly for general-purpose intelligence. This required that the Three Laws be based exclusively in the mechanism of the brain, less visible to the general public.
Despite Asimov's firm optimism in science and humanity in general, the implementation of the Three Laws in their explicit form and, more importantly, public belief in their immutability was a consistent struggle for the characters in his stories. It was the explicit nature of the Three Laws that made the existence of robots possible by directly countering the Frankenstein Complex. Robots in use today are far from humanoid and their safety features are either clearly present or their function is not one that would endanger a human. The Roomba vacuum robot (iRobot Corporation: Home Page 2006) comes to mind as a clear example. One of the first household-use robots, the Roomba is also one of the first to include sensors and behaviors that implement at least some of the Three Laws: it uses a downward-pointing IR sensor to avoid stairs and will return to its charging station if its batteries get low. Otherwise, the Roomba's nature is not one that would endanger a person.
Current opinions
Asimov believed that the Three Laws were being taken seriously by robotics researchers of his day and that they would be present in any advanced robots as a matter of course. In preparation for this writing, a handful of emails were sent out asking current robotics and artificial intelligence researchers what their opinion is of Asimov's Three Laws of Robotics and whether the laws are implementable. Not a single respondent was unfamiliar with the Three Laws and several seemed quite versed in the nuances of Asimov's stories. From these responses it seems that the ethical use of technology, and advanced robots in particular, is very much on the minds of researchers. The use of Asimov's laws as a way to answer these concerns, however, is not even a topic of discussion except, perhaps, in South Korea (Lovgren 2007) and Japan (Christensen 2006). Despite the familiarity with the subject, it is not clear whether many robotics researchers have ever given much thought to the Three Laws of Robotics from a professional
standpoint. Nor should they be expected to. Asimov's Three Laws of Robotics are literary devices and not engineering principles, any more than his fictional positronic brain is based on scientific principles. What's more, many of the researchers responding pointed out serious issues with the laws that may make them impractical to implement.
Ambiguity
By far the most cited problem with Asimov's Three Laws is their ambiguity. The first law is possibly the most troubling as it deals with harm to humans. James Kuffner, Assistant Professor at The Robotics Institute of Carnegie Mellon University, replied in part:
The problem with these laws is that they use abstract and ambiguous concepts that are difficult to implement as a piece of software. What does it mean to come to harm? How do I encode that in a digital computer? Ultimately, computers today deal only with logical or numerical problems and results, so unless these abstract concepts can be encoded under those terms, it will continue to be difficult (e.g., Kuffner, personal communications).
Doug Blank, Associate Professor of Computer Science at Bryn Mawr College, expressed a similar sentiment:
The trouble is that robots don't have clear-cut symbols and rules like those that must be imagined necessary in the sci-fi world. Most robots don't have the ability to look at a person and see them as a person (a human). And that is the easiest concept needed in order to follow the rules. Now, imagine that they must also be able to recognize and understand harm, intentions, other, self, self-preservation, etc, etc, etc. (e.g., Blank, personal communications)
While Asimov never intended for robots with the Three Laws to be required to understand the English form, the point being made above is quite appropriate. It is the encoding of the abstract concepts implied in the laws within the huge space of possible environments that seems to make this task insurmountable. Many of Asimov's story lines emerge from this very aspect of the Three Laws, even as many of the finer points are glossed over or somewhat naïve assumptions are made regarding the cognitive capacity of the robot in question. A word encountered by a robot as part of a command, for example, may have a different meaning in different contexts. This means that a robot must use some internal judgment in order to disambiguate the term and then
determine to what extent the Three Laws apply. As anyone that has studied natural language understanding (NLU) can tell you, this is by no means a trivial task in the general case. The major underlying assumption is that the robot has an understanding of the universe from the perspective of the human giving the command. Such an assumption is barely justifiable between two humans, much less between a human and a robot.
Understanding the effect of an action
In the second novel of Asimov's Robot series, The Naked Sun, the main character, Elijah Baley, points out that a robot could inadvertently disobey any of the Three Laws if it is not aware of the full consequences of its actions (Asimov 1957). While the character in the novel rightly concludes that it is impossible for a robot to know the full consequences of its actions, there is never an exploration of exactly how hard this task is. This was also a recurring point made by several of those responding. Doug Blank, for example, put it this way:
[Robots] must be able to counterfactualize about all of those [ambiguous] concepts, and decide for themselves if an action would break the rule or not. They would need to have a very good idea of what will happen when they make a particular action (e.g., Blank, personal communications).
Aaron Sloman, Professor of Artificial Intelligence and Cognitive Science at The University of Birmingham, described the issue in a way that gets at the sheer immensity of the problem:
Another obstacle involves potential contradictions, as the old utilitarian philosophers found centuries ago: what harms one may benefit another, etc., and preventing harm to one individual can cause harm to another. There are also conflicts between short term and long term harm and benefit for the same individual (e.g., Sloman, personal communications; Sloman 2006).
David Bourne, a Principal Scientist of Robotics at Carnegie Mellon, put it this way:
A robot certainly can follow its instructions, just the way a computer follows its instructions. But, is a given instruction going to crash a program or drive a robot through a human being? In the absolute, this answer is unknowable! (e.g., Bourne, personal communications)
It seems, then, we are asking that our future robots be more than human: they must be omniscient. More than omniscient, they must be able to make value
judgments on what action on their part will be most beneficial (or least harmful) to a human or even humanity in general. Obviously we must settle for something that is a little more realistic.

General attitudes
Even though Asimov attempted to answer these issues in various ways in multiple stories and essays, the subjects of his stories always involved humanoid robots with senses and actions at least as good as, and often better than, humans. This aspect tends to suggest that we should expect capabilities that are on par with humans. Asimov encouraged this attitude and even expressed through his characters that a humaniform robot (one that is indistinguishable externally from a human) with the Three Laws could also not be distinguished from a very good human through its actions. To put it simply: "if Byerley [the possible robot] follows all the Rules of Robotics, he may be a robot, and may simply be a very good man," as spoken by Susan Calvin in the 1946 story "Evidence" (Asimov 1946). Furthermore, Asimov often has his characters espouse how safe robots are. They are, in Asimov's literary universe, almost impossibly safe.
It is possibly the specter of this essentially unreachable goal that has made Asimov's Three Laws little more than an imaginative literary device in the minds of present-day robotics researchers. Maja Mataric, Founding Director of the University of Southern California Center for Robotics and Embedded Systems, said, "[the Three Laws of Robotics are] not something that [are] taken seriously enough to even be included in any robotics textbooks, which tells you something about [their] role in the field" (e.g., Mataric, personal communications). This seems to be the implied sentiment from all of the correspondents despite their interest in the subject.
Aaron Sloman, however, goes a bit further and raises an additional ethical problem with Asimov's three laws:
I have always thought these were pretty silly: they just express a form of racialism or speciesism.
If the robot is as intelligent as you or I, has been around as long as you or I, has as many friends and dependents as you or I (whether humans, robots, intelligent aliens from another planet, or whatever), then there is no reason at all why it should be subject to any ethical laws that are different from what should constrain you or me (e.g., Sloman, personal communications; Sloman 2006).
It is Sloman's belief that it would be unethical to force an external value system onto any creature, artificial or otherwise, that has something akin to human-level or better intelligence. Furthermore, he does not think that such an imposed value system will be necessary:
It is very unlikely that intelligent machines could possibly produce more dreadful behavior towards humans than humans already produce towards each other, all round the world, even in the supposedly most civilized and advanced countries, both at individual levels and at social or national levels.
Moreover, the more intelligent the machines are the less likely they are to produce all the dreadful behaviors motivated by religious intolerance, nationalism, racialism, greed, and sadistic enjoyment of the suffering of others.
They will have far better goals to pursue (e.g., Sloman, personal communications; Sloman 2006).
This same sentiment has been expressed previously by Sloman and others (Sloman 1978; Worley 2004). These concerns are quite valid and deserve discussion well beyond the brief mention here. At the current state of robotics and artificial intelligence, however, there is not much danger of having to confront these particular issues in the near future as they apply to human-scale robots. For the remainder of this paper, we will, therefore, be discussing only robots that do not possess human-level intelligence or anything resembling it.
The three laws of tools
So, to recap, we have shown that the likelihood of a species-ending catastrophe at the hands of a malicious AI or robot horde is infinitesimally small, if not nil. A large-scale accident because of a malfunctioning AI or robot might be possible but is highly unlikely due to the very small number of situations that would make this possible. Instead, the future will contain a large number of much less intelligent but quite capable robotic devices. A small-scale accident at the hands of a malfunctioning robot, however, might very well be possible. We have looked to Isaac Asimov's Three Laws of Robotics as a way to address the Frankenstein Complex that the general public is likely to experience when faced with this new breed of robotic tools, but found that they cannot be directly or fully implemented due to their inherent ambiguity and the high level of intelligence needed to deal with it.
To apply the Three Laws to these less intelligent robotic devices, two questions must be asked: How
much disambiguation should we expect and at what level should a robot understand the effect of its actions? The answer to these questions may have been expressed by Asimov himself. It was his belief that when robots of human-level intelligence are built they will have the Three Laws. Not just something like the Three Laws, but the actual three laws (Asimov 1990). At first glance this seems like a very bold and egotistical statement. While Asimov was less than modest in his personal life, he argued that the Three Laws of Robotics are simply a specification of implied rules for all human tools. He stated them as follows (Asimov 1990):
1. A tool must be safe to use.
2. A tool must perform its function, provided it does so safely.
3. A tool must remain intact during use unless its destruction is required for safety or unless its destruction is part of its function.
From this perspective, the answers to both of the questions expressed earlier in this section emerge. How much disambiguation should we expect? Whatever level makes sense for the level of knowledge of the robot in question. At what level should a robot understand the effect of its actions? To whatever level is appropriate for its level of knowledge. Yes, these may seem like broad answers that give us nothing specific and, therefore, nothing useful. However, given a specific robot with specific sensors, specific actuators and a specific function, these answers become useful. We are no longer faced with the prospect of having to create a god-like robot whose function is to vacuum our floors; instead, we are let off the hook, so to speak. Our robot only has to perform in accordance with the danger inherent in its particular function. It might, for example, be reasonable to expect that the Roomba vacuuming robot have a sensor on its lid that detects an approaching object (possibly a foot) and moves quickly to avoid being stepped on or otherwise damaged. This satisfies both the first and the third laws of robotics without requiring that the robot positively identify the approaching object as a human appendage. The third law is satisfied by allowing the robot to avoid damage while the first law is upheld by reasonably attempting to avoid making a person fall and hurt themselves. There is no need for complete knowledge or even positive identification of a person, only knowledge enough to be reasonably safe given the robot's inherent danger. To uphold the Laws does not require a fully autonomous robot with full human capacities; it only requires that something of human-level intelligence, possibly a human, takes responsibility for upholding the laws within the particular robot being produced.
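A minimal sketch of that lid-sensor behavior might look like the following. The sensor reading and actuator names are hypothetical stand-ins, but the point carries: nothing here requires recognizing a person, only reacting to something closing in quickly.

APPROACH_THRESHOLD_CM = 30   # assumed distance at which something counts as 'approaching'

def lid_sensor_step(distance_cm, robot):
    # One control-cycle check for the hypothetical lid proximity sensor.
    if distance_cm is not None and distance_cm < APPROACH_THRESHOLD_CM:
        robot.pause_cleaning()
        robot.scoot_away()         # avoids damage to itself (third law) and avoids
                                   # tripping whoever is approaching (first law)
    else:
        robot.continue_cleaning()  # nothing detected; keep performing its function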
Other questions
We have now explored the application of the Three Laws at both extremes, from a human-level-intelligent robot to a purely reactive one. While the first extreme of human-level intelligence is worthy of discussion from a philosophical standpoint and must be addressed at some point (see Sloman 2006), it is likely to be many decades before AI will have progressed to a point where these issues are pressing. For the purposes of this paper, we will politely table the question of the point at which our intelligent tools become sentient, or at least sentient enough to be subject to the issues Sloman suggests. On the other hand, the field has progressed quite a ways from the days of hard-coded rules found in reactive systems. The following questions, therefore, can be addressed with respect to smart but not sentient robots.
Should the laws be implemented?
By whatever method is suitable for a specific robot and domain, yes. To do otherwise would be to abdicate our responsibility as scientists and engineers. The more specific question of which laws should be implemented arises at this point. Several people have suggested that Asimov's Three Laws are insufficient to accomplish the goals for which they are designed (Ames 2004; Clarke 1994; Sandberg 2004) and some have postulated additional laws to fill some of the perceived gaps (Clarke 1994). For instance, Clarke's revamped laws are as follows:
The Meta-Law: A robot may not act unless its actions are subject to the Laws of Robotics.
Law Zero: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
Law One: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order Law.
Law Two: A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order Law. A robot must obey orders given it by superordinate robots, except where such orders would conflict with a higher-order Law.
Law Three: A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher-order Law. A robot must protect its own existence as long as such protection does not conflict with a higher-order Law.
Law Four: A robot must perform the duties for which it has been programmed, except where that would conflict with a higher-order law.
The Procreation Law: A robot may not take any part in the design or manufacture of a robot unless the new robot's actions are subject to the Laws of Robotics.
These laws, like Asimov's originals, are intended to be interpreted in the order presented above. Note that Laws Two and Three are broken into two separate clauses that are also intended to be interpreted in order. So Asimov's three laws, plus the zeroth law added in Robots and Empire (Asimov 1985), are expanded here into nine if the sub-clauses are included. Clarke left most of Asimov's stated four laws intact, disambiguated two, and added three additional laws.
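As a minimal sketch of what "interpreted in the order presented" could mean in software, each law can be treated as a veto check applied from highest to lowest priority. The names below are hypothetical, and the predicates themselves, encoding "injure", "obey", and "protect", remain exactly the hard, ambiguous part discussed earlier.

from typing import Callable, List, Tuple

# A law is a name plus a predicate that returns True if a proposed action would violate it.
Law = Tuple[str, Callable[[dict], bool]]

def action_permitted(action: dict, laws_in_priority_order: List[Law]) -> Tuple[bool, str]:
    # Check a proposed action against the laws, highest priority first.
    for name, violates in laws_in_priority_order:
        if violates(action):
            return False, name    # the first violated higher-order law vetoes the action
    return True, ""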
There are still problems even with this more specific set. For example, the Procreation Law has the lowest priority, subordinate even to the fourth law stating that a robot has to follow its programming. In other words, a robot could be programmed to create other robots that are not subject to the Laws of Robotics, or be told to do so by a human or other superordinate robot pursuant to Law Two. Even if we reorder these laws, situations will still arise where other laws take precedence. There doesn't seem to be any way of creating foolproof rules, at least as stated in English and interpreted with the full capacities of a human.
Are the laws even necessary?
What good, then, is even a revised set of laws if it cannot be directly put into practice?[1] Luckily, our robots do not need the laws in English and will not, at the moment, have anything close to the full capacity of a human. It is still left to human interpretation as to how and to what level to implement the Laws for any given robot and domain. This is not likely to be a perfect process. No one human or even group of humans will be capable of determining all possible situations and programming for such. This problem compounds itself when the robot must learn to adapt to its particular situation.
We could rely on the government and the legal system to produce laws dictating a code of conduct for robot manufacturers and spelling out legal consequences, as is being done in Japan (Christensen 2006), but this sets the wrong tone. In the public's mind, roboticists and AI researchers would, by default, be seen as likely to commit the crimes addressed by the laws. This would not help alleviate the Frankenstein Complex but would exacerbate it. Instead, people involved in the research and development of intelligent machines, be they robots or some other form of artificial intelligence, need to each make a personal commitment to be responsible for their creations, something akin to the Hippocratic Oath taken by medical doctors. Not surprisingly, this same sentiment was expressed by Bill Joy: "scientists and engineers [need to] adopt a strong code of ethical conduct, resembling the Hippocratic oath" (Joy 2000). The modern Hippocratic Oath used by most medical schools today comes from a rewrite of the ancient original and is some 341 words long (Lasagna 1964). A further rewrite is presented here, intended for Roboticists and AI Researchers in general:
I swear to fulfill, to the best of my ability and judgment, this covenant:
I will remember that artificially intelligent machines are for the benefit of society and will strive to contribute to that society through my creations.
Every artificial intelligence I have a direct role in creating will follow the spirit of the following rules:
1. Do no harm to humans either directly or through non-action.
2. Do no harm to myself either directly or through non-action unless it will cause harm to a human.
3. Follow the orders given me by humans through my programming or other input medium unless it will cause harm to myself or a human.
I will not take part in producing any system that would, itself, create an artificial intelligence that does not follow the spirit of the above rules.
The Roboticist's Oath has a few salient points that should be discussed further. The overarching intent is to convey a sense of one's connection and responsibility to humanity along with a reminder that robots are just complex tools, at least until such point as they are no longer just tools. When that might be or how we might tell is left to some future determination. The Oath then includes a statement that the researcher will always instill in their creations the spirit of the three rules that follow. The use of the word spirit here is intentional. In essence, any AI Researcher or Roboticist should understand the intent of the three rules and make every reasonable
effort to implement them within their creations. The rules themselves are essentially a reformulation of Asimov's original Three Laws with the second and third law reversed in precedence.

[1] Once again, we are politely sidestepping Sloman's meaning of this question, which suggests that an imposed ethical system will be unnecessary for truly intelligent machines.
Why the reversal? As Asimov himself points out in "The Bicentennial Man" (Asimov 1976), a robot implementing his Laws could be forced to dismantle itself for no reason other than the whim of a human. In that story, the main character, a robot named Andrew Martin, successfully lobbies congress for a human law that makes such orders illegal. Asimov's purpose in making the self-preservation law a lower priority than obeying a human command was to allow humans to put robots into dangerous situations when such was necessary. The question then becomes whether any such situation would arise that would not also involve the possible harm to a human. While there may be convoluted scenarios when a situation like this might occur, there is a very low likelihood. There is high likelihood, on the other hand, as Clarke pointed out (Clarke 1993, 1994), that humans would give a robot instructions that, inadvertently, might cause it harm. In software engineering it is one of the more time-consuming requirements that code must have sufficient error checking. This is often called idiot-proofing one's code. Without such efforts, users would be providing incorrect data, inconsistent data, and generally crashing systems on a recurring basis. It is the software engineer's responsibility to reduce the occurrence of these mistakes.
The Roboticist's Oath also leaves out the zeroth law. For Asimov, it is clear that the zeroth law, even more than the others, is a literary device created by a very sophisticated robot (Asimov 1985) in a story written some four decades after the original Three Laws. Furthermore, such a law would only come into play at such point when the robot could determine the good of humanity. If or when a robot can make this level of distinction, it will have gone well beyond the point where it is merely a tool and the use of these kinds of rules should be re-examined (Sloman 2006). Finally, if an artificial intelligence were created that was not sophisticated enough to make the distinction itself, yet would affect all of humanity, then the Oath requires that the creators determine the appropriate safety measures with the good of society in mind.
A form of Clarke's procreation law (Clarke 1994) has been included in the Roboticist's Oath, but it has been relegated to the responsibility of humans. The purpose of such a law is evident. Complex machines manufactured for general use will, inevitably, be constructed by robots. Therefore, Clarke argues, a law against creating other robots that do not follow the Laws is necessary. Unfortunately, such a law is
not implementable as an internal goal of a robot. The constructing robot, in this case, must have the ability to determine that it is involved in creating another robot and have the ability to somehow confirm whether the robot it is constructing conforms to the Laws. The only situation where this might be possible is when a robot's function includes the testing of robots after they are completed and before being put into operation. It is, therefore, incumbent upon the human creators to make sure that their manufacturing robots are creating robots that adhere to the rules stated in the Oath.
Will even widespread adherence to such an oath prevent all possible problems or abuses of intelligent machines? Of course not. As with medical doctors, there may still need to be laws on the books to deal with robotics malpractice, but it will reduce occurrences and give the general public an added sense of security and respect for practitioners of the science of artificial intelligence, in much the same way as the Hippocratic Oath does for physicians. Is the Roboticist's Oath necessary? Probably not, if one only considers the safety of the machines that might be built. Those in this field are highly intelligent and moral people who would likely follow the intent of the oath even in its absence. However, it is important in setting a tone for young researchers and the public at large.
The future
Many well-known people have told us that the human race is doomed to be supplanted by our own robotic creations. Hollywood and the media sensationalize and fuel our fears because it makes for an exciting story. However, when one analyzes the series of improbable events that must occur for this to play out, it becomes obvious that we are quite safe for the following reasons:
1. It is not likely that the majority of the public will pay the very high price tag for a general-purpose robot when a set of much less expensive, less intelligent devices will perform the same tasks just as automatically.
2. Evolution of physical robots could not happen any faster than human evolution and it would require millions of robots to be created every year without human oversight of any kind.
3. Evolution occurs because of a lack of resources and/or competition. Evolutionary pressures for human-level intelligence and the development of some way of affecting the real world in some meaningful way could not occur in a virtual environment. Outside of a virtual environment, the argument of #2 above applies.
4. Any robots being mass produced will be put through the same scrutiny as automobiles, with hundreds sacrificed each year to test for safety and recalls issued for even minor problems discovered after deployment.
Evolution cannot occur under these circumstances through anything other than normal human redesign. Unfortunately, there is still the possibility of technology misuse and irresponsibility on the part of robotics and AI researchers that, while not resulting in the obliteration of humanity, could be disastrous for the people directly involved. For this reason, Bill Joy's call for scientists and engineers to have a Hippocratic Oath (Joy 2000) has been taken up for roboticists and researchers of artificial intelligence. The Roboticist's Oath calls for personal responsibility on the part of researchers and asks them to instill in their creations the spirit of three rules stemming from Isaac Asimov's original Three Laws of Robotics.
The future will be filled with smart machines. In fact they are already all around you: in your car, in your cell phone, at your bank, and even in the microwave that senses when the food is properly cooked and just keeps it warm until you are ready to eat. These will get smarter but not sentient, not alive. A small number of robots in labs may achieve human-level or better intelligence, but these will be closely studied oddities. Can the human race still destroy itself? Sure, but not through artificial intelligence. Humanity must always be wary of its power and capability for destruction. It must also not fear the future, with or without intelligent robots.
References
M.R. Ames. 3 Laws Don't Quite Cut It [Electronic Version]. 3 Laws Unsafe, from http://www.asimovlaws.com/articles/archives/2004/07/3_laws_dont_qui.html, 2004.
I. Asimov. Runaround. Astounding Science Fiction, March 1942.
I. Asimov. Evidence. Astounding Science Fiction, March 1946.
I. Asimov. The Naked Sun. Doubleday, 1957.
I. Asimov. The Bicentennial Man. In Stellar Science Fiction, February ed. Vol. 2. 1976.
I. Asimov. The Machine and the Robot. In P.S. Warrick, M.H. Greenberg and J.D. Olander, editors, Science Fiction: Contemporary Mythology. Harper and Row, 1978.
I. Asimov, Robots and Empire. Doubleday & Company, Garden City, 1985.
I. Asimov. The Laws of Robotics. In Robot Visions, pp. 423–425. ROC, New York, NY, 1990.
B. Christensen. Asimov's First Law: Japan Sets Rules for Robots [Electronic Version]. LiveScience, from http://www.livescience.com/technology/060526_robot_rules.html, 2006.
R. Clarke. Asimov's Laws of Robotics: Implications for Information Technology, Part 1. IEEE Computer, 26(12): 53–61, 1993.
R. Clarke. Asimov's Laws of Robotics: Implications for Information Technology, Part 2. IEEE Computer, 27(1): 57–65, 1994.
D. Cohn. AI Reaches the Golden Years. Wired. Retrieved July 17, 2006, from http://www.wired.com/news/technology/0,71389-0.html?tw=wn_index_2, 2006.
J. Holland, Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI, 1975.
iRobot Corporation: Home Page. Retrieved July 19, 2006, from http://www.irobot.com, 2006.
D.G. Jerz. R.U.R. (Rossum's Universal Robots) [Electronic Version]. Retrieved June 7, 2006 from http://www.jerz.setonhill.edu/resources/RUR/, 2002.
B. Joy. Why the future doesn't need us. Wired, 8.04, April 2000.
R. Kurzweil. The Age of Spiritual Machines. Viking Adult, 1999.
R. Kurzweil. The Singularity is Near: When Humans Transcend Biology. Viking Books, 2005.
L. Lasagna. Hippocratic Oath: Modern Version. Retrieved June 30, 2006, from http://www.pbs.org/wgbh/nova/doctors/oath_modern.html, 1964.
S. Lovgren. Robot Code of Ethics to Prevent Android Abuse, Protect Humans. National Geographic News, March 16, 2007.
H.P. Moravec, Robot: Mere Machine to Transcendent Mind. Oxford University Press, Oxford, 1998.
A. Sandberg. Too Simple to Be Safe [Electronic Version]. 3 Laws Unsafe. Retrieved June 9, 2006 from http://www.asimovlaws.com/articles/archives/2004/07/too_simple_to_b.html, 2004.
M. Shelley, Frankenstein, or The Modern Prometheus. Lackington, Hughes, Harding, Mavor & Jones, London, UK, 1818.
A. Sloman. The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind. Harvester Press, 1978.
A. Sloman. Why Asimov's Three Laws of Robotics are Unethical. Retrieved June 9, 2006, from http://www.cs.bham.ac.uk/research/projects/cogaff/misc/asimov-three-laws.html, 2006.
K. Warwick. I, Cyborg. Century, 2002.
G. Worley. Robot Oppression: Unethicality of the Three Laws [Electronic Version]. 3 Laws Unsafe, from http://www.asimovlaws.com/articles/archives/2004/07/robot_oppressio_2.html, 2004.