Tuesday, April 14, 2015

No AI Warning Label Necessary

Editor’s note: Charles Ortiz is the senior principal manager of the Artificial Intelligence and Reasoning Group at the Nuance Natural Language and AI Laboratory.
Notwithstanding the great positive potential of AI, there has recently been a debate in the media and industry, and even in recent movies, regarding the possibility that AI could lead to a dangerous superintelligence (SI) – one that might overrun the human race.
The typical argument runs something like this: an AI’s natural evolution will take it along a path through which it will not only reach the level of human intelligence (though this is still a distant goal) but eventually exceed it, continuing until its cognitive abilities are as great compared to ours as ours are to, say, a cat’s. The denouement of this story then (usually implicitly) draws on the famous “Pascal’s Wager”: either such an SI will be evil or it will be good. If the former, the consequences for the human race would be so incredibly horrible that we dare not risk it.
One way to examine this argument is through a common paradigm for modeling intelligent agents in AI: how an agent, human or artificial, chooses to act. In AI, the possible actions an agent can take can be ordered according to the utility, or desirability, of that agent’s relevant goals. So, if the agent has a goal to get to the other side of a street and it is raining, it might prefer the action of carrying an umbrella if it also assigns higher utility to staying dry.
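To make the paradigm concrete, here is a minimal sketch of utility-based action selection applied to the umbrella example; the action names, probabilities and utility values are illustrative assumptions, not taken from any particular system.

```python
# A minimal sketch of utility-based action selection. The actions, outcomes
# and utility values below are invented for illustration only.

def expected_utility(action, p_rain, utility):
    """Average the utility of each outcome, weighted by its probability."""
    outcomes = {
        "carry_umbrella": {"dry": 1.0},                        # dry either way
        "go_without":     {"dry": 1.0 - p_rain, "wet": p_rain},
    }
    return sum(p * utility[state] for state, p in outcomes[action].items())

utility = {"dry": 10, "wet": 2}   # the agent assigns higher utility to staying dry
p_rain = 0.7

best = max(["carry_umbrella", "go_without"],
           key=lambda a: expected_utility(a, p_rain, utility))
print(best)   # -> carry_umbrella
```

The agent simply ranks its available actions by expected utility and picks the highest; richer agents differ mainly in how elaborately they model outcomes and preferences.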
Our first step when faced with the doomsday argument should then be to consider some of the possible consequences of AI for the good of mankind. It is not unreasonable to suggest that AI has the potential to radically transform the degree to which people can utilize and process information, in ways that we simply cannot today. There are varying scales along which this can occur.
In the near term, AI technologies such as machine learning and neural nets can simplify our everyday actions, creating a world of virtual assistants and human-like interfaces that ease our use of connected and intelligent devices and systems, and give us access to a wealth of content or information in just a few spoken words or a simple gesture.
Longer term, however, the positive impact of these systems as they mature will be much greater, driving important advancements for society in areas like healthcare, education, the economy and many more.
For instance, AI systems will be able to help doctors reach diagnoses much faster, or serve as virtual teachers in remote parts of the world with the wealth of Internet knowledge at their fingertips (perhaps even democratizing education where it has been unavailable or unaffordable). Many AI researchers are driven by the promise of realizing such fulfilling futures. How might such systems assist humans in the realizable future? An analogy with chess suggests one possibility.
A while back, Garry Kasparov proposed a new form of chess in which a person and a computer play together against another person and a computer. Referred to as “Advanced Chess,” the idea was that the human would provide the creativity while the computer would serve as a tool to explore different options and how they might pan out. Because the computer is very good at projecting into the future in detail, the human would, at the very least, avoid silly mistakes or blunders.
Analogously, we can imagine AI systems that work on human problems such as world hunger, acting as a human’s assistant in exploring different options and their possible consequences. The difficulty, however, is that the chess universe can be simply and completely specified by the position of the pieces on the board and by the strict rules that govern the transition from one position to another. The world is much more complex. Nevertheless, one can imagine that as AI systems mature, they could become more adept at laying out the problem space and exploring it (the board and the rules).
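As a rough illustration of what “laying out the problem space and exploring it” means, the following sketch defines a toy state space by its states, transition rules and goal test, then searches it breadth-first; the toy problem (reaching a target number by +1 or ×2 moves) is purely an assumption for the example, whereas a chess engine would use board positions as states and the legal moves of chess as transitions.

```python
from collections import deque

# A toy "problem space": states are integers, the transition rules generate
# legal successor states, and the goal test picks out the target. All of it
# is illustrative, standing in for a board position and the rules of a game.

def successors(state):
    return [state + 1, state * 2]          # the "rules" of this toy problem

def explore(start, is_goal):
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path                     # sequence of states reaching the goal
        for nxt in successors(path[-1]):
            if nxt not in seen and nxt <= 100:   # bound keeps the toy space finite
                seen.add(nxt)
                frontier.append(path + [nxt])

print(explore(1, lambda s: s == 12))        # -> [1, 2, 3, 6, 12]
```

The human’s role in the Advanced Chess arrangement would be to decide which goals are worth pursuing; the machine’s role is this kind of exhaustive, detailed exploration of consequences.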
However, part of the doomsday argument also involves the claim that the likelihood that an SI will be evil is simply much greater. This is based on the observation that, with regard to human history at least, greater intelligence has bred greater ambition, in particular ambition for control and power (despite the fact that psychology research suggests that IQ is negatively correlated with crime among humans). Such beings would, therefore, surely evolve in ways similar to us and seek to dispense with us.
Using the same utility-based analysis applied so far (for determining what we should do now based on the likely consequences to humanity) to instead model the behavior of an SI itself, we see that an “evil” SI would approximate a purely self-interested, utility-maximizing ideal that assigns the highest utility or desirability to outcomes that would benefit it and not other beings. However, this assumes a very selective view of the space of possible future SIs; in particular, it restricts itself to a single dominant possible future in which an SI is strictly self-interested. But, in fact, there are many other possibilities.
There is simply no a priori reason to believe that AI would necessarily evolve into the self-interested variety rather than into one in which collaborative and helpful concerns are factored equally into its utility calculations. In the latter case, such entities might become motivated to act as our teachers. Alternatively, they might aspire to be our colleagues and work together with us. Even some of the originators of utilitarianism suggested that it be axiomatic that the highest utility be identified with the “greatest happiness of the greatest number.”
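To illustrate the contrast between these two varieties, here is a hedged sketch of two utility functions evaluated over the same outcomes: one purely self-interested, the other a Bentham-style aggregate of everyone’s benefit. The outcomes, names and numbers are invented solely for this example.

```python
# Two illustrative utility functions. An "outcome" maps each affected party
# to the benefit it receives; the values below are made up for the sketch.

def self_interested_utility(outcome, agent="SI"):
    """Counts only the benefit accruing to the agent itself."""
    return outcome.get(agent, 0)

def aggregate_utility(outcome):
    """'Greatest happiness of the greatest number': sum benefit across all parties."""
    return sum(outcome.values())

outcome_a = {"SI": 10, "humans": -5}   # the SI gains at humanity's expense
outcome_b = {"SI": 6,  "humans": 8}    # a collaborative outcome

# The two utility functions rank the same outcomes differently:
print(max([outcome_a, outcome_b], key=self_interested_utility))  # -> outcome_a
print(max([outcome_a, outcome_b], key=aggregate_utility))        # -> outcome_b
```

Which of these functions an SI ends up maximizing is exactly the open question; nothing about utility maximization itself forces the self-interested choice.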
And if utilitarianism is construed as having ethical foundations, it should not be unreasonable to suggest that an SI would demonstrate desirable ethical behavior. In such a case, the worst that might happen is that SIs might treat us as their pets; I’m not sure how worrisome that might be – my daughter’s cat doesn’t seem to have such a bad life: sleep, eat, get a massage here and there – repeat.
One might suggest that an SI should not be expected to share our notions of what is desirable and what is not. That is certainly true but, again, there is no reason to prefer that possible future over others.
Should an SI end up sharing some of our values, it might behave in such a way that we would be justified in ascribing to it a feeling of indebtedness to us (if we accept one compelling proposal that draws strong connections between emotions and underlying values, and if we allow for the possibility that the SI might also, from time to time, demonstrate emotions). Such indebtedness might arise because we were, so to speak, its creator, and hence it would never do us harm, in the same way that a poor, uneducated parent receives respect from a child who turns out to be mentally gifted.
Should our SI be driven by more dispassionate concerns, we would have to ask how those might have come about, particularly given that it is likely that we, as its creator, would have bounded its behavior when we programmed it. Such an eventuality would depend, it seems, on building it so that it would or could eventually become endowed with free will.
At the moment, however, nobody has the slightest idea of what it is about us that allows us to have free will, given that our minds should obey physical laws just like anything else. This enormous mystery has to be solved before such an SI can be realized, and we have no idea at the moment how to solve it.
Finally, such overly developed entities might turn out to be physical underachievers with a utility function strictly correlated with epistemic gains, causing them to become bored with us, and with doing anything other than sitting around cogitating and proving mathematical theorems.
In fact, an SI might evolve into an entity whose intelligence is merely different from ours, not comparable to it: a musician gifted with perfect pitch need not be superior to one who is not so gifted. We know that planes don’t fly like birds, but they nonetheless can fly. Birds, in turn, can still do certain things that planes can’t, such as navigate through very cluttered airspaces. Can either be said to be “better” than the other?
Out of all of these seemingly reasonable and possible futures — and one could go on and on — there does not appear to be any reason to give preference to the doomsday future.
To be fair, there are a few other reasonable possibilities that might come about through human intervention. One is that some evil person – take your favorite James Bond movie villain, for example – could take new AI technology and use it to cause harm to others. However, that just means that we need to do a better job policing our villains. And there are already lots of ways today to cause harm without needing to wait for an SI to do so.
Another has to do with the emergence of unintended bugs in a program, but this problem is already present in the development of any software. There is also the possibility that we will come to feel threatened by an SI and seek to destroy it: it may then choose to defend itself, but even then, why would it necessarily choose to destroy the entire human race in order to do so?
The message that we should send, then, is that there is no reason to suspect that an evil AI will develop on its own, and hence no reason to give in to hyperbole about doomsday scenarios. The emphasis should instead be on the good that can come out of AI: AI products will not need warning labels. People should sit down, take off their shoes, pour themselves a drink and relax, because AI is safe for human consumption and will remain so for the foreseeable future.