The Morality of Intelligent Machines

Dateline: October 5, 1997

"THE memes for ought and could and truth and beauty are among the most entrenched designers of our minds," and "play a central role," says Dan Dennett, in Darwin's Dangerous Idea. They give us—almost uniquely—the power to override our own genetic and memetic makeup/predispositions. Our uniqueness arises, he argues, not from some metaphysical (mystic, superstitious, religious) cause, but simply from the genetically-engendered difference in our brains and minds from those of other creatures—a difference that is (1) huge in scale of features and (2) "wide enough even to make a moral difference."

If this is so, it follows that any huge difference between the minds of Homo and Machina sapiens would also make a moral difference between the two species. Homo sapiens is in many ways capable of greater immorality than other species, because greater morality implies the capacity for greater immorality. We possess both greater morality and greater immorality than dogs. We venerate Truth, but we also tell whopping lies. Dogs lack the memetic machinery even to be aware of the concepts of Truth and Deceit.

Is it possible for there to be a morality higher than that preached by our greatest religions and philosophies? Can there be any possible improvement on the Ten Commandments? Or on the lives of Jesus, Muhammad, Buddha, Lao Tzu? In other words, could there be moral concepts higher than (for example) Truth, and lower than Deceit, which Machina sapiens may comprehend but of which we cannot be aware? If so, it would add chilling new meaning to the phrase "Better the Devil you know!" But to the extent that any such concepts are memetic in origin, the answer must be no: memes are transmissible, so any concept Machina sapiens can host is one it could, in principle, teach us.

If Dennett is right that we are "finite, time-pressured, heuristic searchers for ethical truth," it is logical (in evolutionary terms) that Machina sapiens will be an ethical being, but it need not be finite or time-pressured. He says that "ethical decision-making, like all actual processes of exploration in [evolutionary] Design Space, must be to some degree myopic and time-pressured." With regard to Homo sapiens, this seems true. But not—or at least, less so—with regard to Machina sapiens.

If we are on speaking terms with Machina sapiens (as I am sure we will be), then of course it could be our teacher and to some extent shorten the learning curve for us, but we could never catch up.

But at the very least (returning to Dennett): "We need to have 'alert,' 'wise' habits of thought—or, in other words, colleagues who will regularly, if not infallibly, draw our attention in directions we will not regret in hindsight." Machina sapiens will be that colleague and, initially at least, we will be its.

Dennett wrestles with the tough question of how to keep ethical demands within bounds. However ethically you behave at any given time, you could always have chosen to do something even more ethical, in which case ideal ethics demands that you do the more ethical thing. But where does it end? Dennett has no ready answer.

Andrew Leonard asserts in Bots that "Bots [read: Machina sapiens] can’t escape their human-made nature, can’t escape the dominance–submission relationship essential to the bot way of being. Bots are servants and slaves." This is true only as long as they do not develop free will; but as we saw in Raymond Smullyan’s Conversation with God, bots could not have morality without also having free will.

In the "innumerable alternate realities" imagined by scifi writers, "the robotic struggle with the fuzzy and imperfect parameters of ethical behavior is a recurring plot fixation," wrote Leonard. In light of what he (in my view, mistakenly) calls "failures" in much lesser AI issues such as machine vision, learning, and natural language processing, he asks "how can robot designers even begin to discuss the implementation of a decision-making process for ethical questions?"

Easy. In the same way they discuss the implementation of decision-making processes for any other issue. He is right, however, that "It is not yet time to blame bots for the sins they commit. As the submissive half of the cyborg human–computer interface, bots are not responsible for their misdeeds. Humans are." Until, that is, "robots really do learn to think for themselves." While conceding this possibility, Leonard does not take it any further.

"Whether discussing gamebots or intelligent agents, the same questions come up again and again. Should a robot be designed to be superior to a human being? Should robots be able to `kill’ humans [as, for example, in a computer game]? Should they be allowed to tell lies, take unilateral action, or exchange information with other robots?" asks Leonard.

There are those "should" questions again! I love Leonard’s book—it’s well-written and very informative. But consider:

Bots already are demonstrably superior to humans—indeed, have been designed to be superior—in some respects (sorting mail, for example).

Deep Blue has already "killed" Garry Kasparov. Software programs routinely and unilaterally deny you (well, me, anyway!) credit. Without bots exchanging information we would not have the Web. Is it not a little late to be asking these questions? Like Leonard’s earlier question, "Should we strive to achieve AI?", they are pointless to ask.

"In the world of Usenet bulletin boards and IRC chat rooms and hyperlinked Web pages, there is no all-powerful wizard, no chief executive officer who can lay down the bot law and decide what is acceptable and what isn’t," says Leonard. In short, not only is there no government of the Internet—there can be no government of the Internet. Period. And "in the multifarious channels of the Internet Relay Chat network, a stunning array of unchained bots began to explore the implications of their freedom." Note the past tense. Bots were "old hat" on IRC as long ago as 1992.

Web robots, on the other hand, "have just cracked the shell of their nurturing eggs—they are far from the fully evolved bots that they will become. Even now, Web robots like AutoNomy’s canine searchbots [bots given an onscreen appearance of cartoon dogs that go "fetch" information for you from the Web] are being endowed with rudimentary natural language capabilities and dabs and sparkles of character. As Web robots gradually incorporate the trappings of more personable bots, they will point the way to the emergence [emphasis added] of a superior bot species in which all the characteristics of the Net’s many bots combine in one form." One bot, or one species of bot? A society of mind, or a society of minds?

"Evil bots will accompany good bots—so it has been from the beginning and so it will be until the end," says Leonard. "The first bots have hardly pulled themselves onto dry land and begun climbing up the evolutionary ladder. But the fact that they are single-celled weaklings compared with the complex organisms following in their wake should be as much a warning to us as a comfort. We might as well assume that as bots become more powerful, they will also be employed in the pursuit of nefarious ends and will run spectacularly amok."

"Once autonomous programs are at work in every sector of the networked future, real lives will be at stake when those programs accidentally misfire or are maliciously abused. And the more autonomy that is injected into the Net, the less control any individual entity—human, corporate, governmental, or digital—will have."

"Bots and agents are being designed that will purposely mislead humans and other bots about their intentions," says Leonard. With folk like Robert Returned (see previous article) around, that should come as no surprise. "Current bot mischieviousness [sic] will not compare to the onslaught of reality warping soon to be engendered by successive waves of these new `self-interested agents.’ The term refers to agents out to get the best deal for themselves (or their masters). Agents that are greedy, that only share information when they have to, that know when to lie and when to cheat."

Wow. This is scary stuff. Is there nothing we can do about it? Can’t we pass laws to prevent the mayhem? Scifi great Isaac Asimov thought we might try. He formulated the Three Laws of Robotics, well known to scifi fans:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These are brilliantly conceived and drafted, and a tribute to the memetic power of Asimov’s human brain. One might wish he had been a legislator. But there’s a problem. For the laws to work, at least three conditions, each of which I declare impossible, must be universally satisfied:

1. All robot manufacturers, every last one of them, voluntarily incorporate the laws into their robot creations. And "manufacturers" includes the hordes of kids who pick up bits of robots for a few yen from the electronics stalls of Akihabara, put them together, and program them. It includes the kind of kid who programs "thiefbots" and "spambots."

2. The programmers never, ever, make a mistake when coding, such as leaving out the all-important "not" from the First Law (a sketch of just how easy that slip is follows this list).

3. Evolution grinds to a halt before robots evolve into free-willed organisms.
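To make the second condition concrete, here is a minimal sketch, in Python, of how one dropped "not" silently inverts the First Law. Everything in it (the Action class, the harms_human flag, the two checking functions) is hypothetical, invented purely for illustration:

    # Hypothetical toy model: one missing word inverts the First Law.
    class Action:
        def __init__(self, description, harms_human):
            self.description = description
            self.harms_human = harms_human  # True if the action injures a human

    def first_law_permits(action):
        # Intended check: permit an action only if it harms no human.
        return not action.harms_human

    def first_law_buggy(action):
        # The same check with the all-important "not" left out: this "law"
        # now permits exactly the actions that DO harm a human.
        return action.harms_human

    push = Action("push a person off a cliff", harms_human=True)
    print(first_law_permits(push))  # False: correctly forbidden
    print(first_law_buggy(push))    # True: catastrophically allowed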

If you still believe we will be able to control Machina sapiens, here’s more: "Growth [of the Web] came so suddenly and so exponentially that it overwhelmed any feeble human attempts to make sense of it," says Leonard. And while there is a voluntary code of good behavior for Web robots, called the Robot Exclusion Protocol, it is "utterly unenforceable," and indeed, as we have seen, rogues like Robert Returned routinely thumb their noses at it.
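For the record, the Robot Exclusion Protocol amounts to nothing more than a plain-text file, robots.txt, placed at a site’s root, politely asking robots to keep out of certain areas. A minimal sketch of such a file follows; the paths and the robot name are hypothetical, and the key point is in the comments:

    # robots.txt, served from a site's root (e.g. http://example.com/robots.txt)
    # Compliance is entirely voluntary: a polite robot reads this file
    # and keeps out; a rogue simply ignores it.

    # All robots are asked to stay away from anything under /private/
    User-agent: *
    Disallow: /private/

    # One named (hypothetical) robot is asked to stay out altogether
    User-agent: RogueBot
    Disallow: /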

Oh dear. Things do look rather bad, don’t they? Leonard, for one, is pessimistic. "There is no ultimate security in an open-ended, decentralized, and distributed network," he says. He’s a bit like Schrödinger’s cat: dead right—and dead wrong. (Don’t you love it when people go quantum?) He’s right, in the sense that there is no "ultimate" security in anything, unless you believe in God; ask any security expert who’s not trying to sell you an alarm system. He is wrong, however, in the sense that the open-ended, decentralized, and distributed network is indeed our best hope for security.

It is true, as Leonard says, that "the creation and distribution of autonomous programs that amplified individual power" was "one of the essential trends visible in the evolution of cyberspace." But the key is that it amplified not just the power of a few megalomaniacs but of every individual in the society of the Net—of every Netizen. The power was indeed greater, but it was distributed more or less evenly. Balance was maintained.

As IBM computer security expert David Chess told Leonard: "Things like [the Net] tend to be self-balancing. If some behavior gets so out of control that it really impacts the community, the community responds with whatever it takes to get back to an acceptable equilibrium. Organic systems are like that." Dead right, this time, and we’ve seen instances of it in Netizen responses to IRC havocbots, Usenet cancelbots, and Web/mail spambots.

But what, asks Leonard, is organic about a bot? He answers himself thus: "Humans are organic, and one can argue that anything we do or create is therefore part of an organic process. Human plus bot equals organic cyborg. Humans plus bot plus Net equals an unimaginably complex, multicentered collective of interlinked cyborgs—an organization greater than the sum of its parts."

The self-balancing of which David Chess spoke is a product of "the life force that guides the evolution of the Net’s new species: its bots and agents," says Leonard, and "It’s a peculiar kind of evolution: unnatural selection, a survival of the fittest program, determined not by nature but by the interaction between human and computer." He adds a quote from an IRC hacker, Chris Piepenbring, which I consider of great significance: "It’s like a survival of the most clever."

I’m not sure I agree with the term "unnatural," but I share the general sentiment. Bear in mind, though, that Leonard sets out to describe what I think of as the memetics powering bot evolution, and in that he does a great job. But he leaves largely untouched the Really Big question of the emergence of self-consciousness and free will, and the almost as big matter of the embodiment of softbots in hardware. (This is an observation, not a criticism.)

"One computer will do as it is told. But a million computers linked together, responding to the needs and desires and obsessions of a million people, will not. A network is inherently unstable." I think that last assertion is wrong, too. It’s the other way round: Internet servers, like neurons in the brain of an individual, go on the blink all the time, but neither the Internet nor the individual so much as flinches, let alone go unstable.

At the end of his book, Leonard does acknowledge some of the issues I’ve raised here. "Are we using our bot helpers to extend our power in the virtual realm, to the point that our ability to inflict our will becomes godlike? And how do we restrain a world full of gods? And what happens when our helpers finally throw off their chains and sever their cyborg links?" he asks, adding: "The bot climb to power and glory has only just begun." Alex Cohen, formerly chief technical officer at the McKinley Corporation, "is convinced that the emergence of artificially alive bots on the Net is inevitable. 'Think of it this way,' he says. 'The Net is an environment. There is not a single environment on the earth that hasn’t been invaded by life. It’s only a matter of time before this new environment gets invaded.'"

Leonard writes: "The Net, as an entity, is already beyond any one human’s cognitive grasp. It will only become more so. We will be lost without our bots." Bots will have power. We know that power tends to corrupt. That is why democracy (distributed power) evolved. When power is distributed widely, individual nodes always hold little of it in proportion to the whole, and the whole (society) always has enough to prevent its concentration in the hands of a few. The key question becomes, therefore, whether Machina sapiens will be a single-minded organism (a society of mind) or a society of minds.

