From its inception, artificial intelligence (AI) has had a rather ambivalent relationship with humans, swinging between their augmentation and their replacement. Now, as AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. To do this effectively, AI systems must pay more attention to aspects of intelligence that help humans work with each other, including social intelligence. I will discuss the research challenges in designing such human-aware AI systems, including modeling the mental states of humans in the loop, recognizing their desires and intentions, providing proactive support, exhibiting explicable behavior, giving cogent explanations on demand, and engendering trust. I will survey the progress made so far on these challenges and highlight some promising directions. I will also touch on the additional ethical quandaries that such systems pose. I will end by arguing that the quest for human-aware AI systems broadens the scope of the AI enterprise; necessitates and facilitates true interdisciplinary collaborations; and can go a long way toward increasing public acceptance of AI technologies.
Artificial intelligence (AI), the discipline we all call our intellectual home, is suddenly having a rather large cultural moment. It is hard to turn anywhere without running into mentions of AI technology and hype about its expected positive and negative societal impacts. AI has been compared with fire and electricity in its overall importance to humanity, and commercial interest in AI technologies has skyrocketed. Universities, and even high schools, are rushing to start new degree programs or colleges dedicated to AI. Civil society organizations are scrambling to understand the impact of AI technology on humanity, and governments are competing to encourage or regulate AI research and deployment.
There is considerable hand-wringing by pundits of all stripes about whether, in the future, AI agents will get along with us or turn on us. Much is being written about the need to make AI technologies safe and to forestall doomsday. I believe that, as AI researchers, we are not (and cannot be) passive observers. It is our responsibility to design agents that can and will get along with us (figure 1). Making such human-aware AI agents, however, poses several foundational research challenges that go beyond simply...