

ASIMO, Honda's humanoid robotic superstar, has no eyes. Indeed, he has no face. Instead, he views the world through a black, blank monitor screen.

Why this restraint by his creator? One of the scientists who designed him confessed that, with a humanoid face, ASIMO looked too human: hence, menacing.

With a computer head atop a four-foot, 114-pound plastic body, he looked charming, endearing and cute - hence, safe. Other creators, though, will not hesitate. Humanoid robots with soft, textured faces, indistinguishable from human faces (though perhaps less flawed) already exist - although only in the lab, so far.

ASIMO has introduced millions of people to humanoid robots in the last eight years - in his early cross-country tours (2002-2005), in his regular public appearances (Disneyland) and in his special performances (as a conductor of the Detroit Symphony Orchestra, and as a guest at Windsor Castle).

There are, after all, 100 of him. He walks, runs, climbs stairs, shakes hands, faces people when they speak to him, recognizes people he has met before, responds to his name - and expresses assent or dissent either with a nod of his head or with a verbal response.

ASIMO's most brilliant performance yet, however, may well be his starring role in Honda Canada's ASIMO's Journey, a television and movie-theatre commercial (from the Montreal- and Toronto-based Bos ad agency) in which the charismatic robot travels across the country to the inspired lyrics of Getting to Know You (and "all about you"), the Rodgers and Hammerstein hit from The King and I.

In the first frames, a wistful ASIMO and his human mother look out the front window of a modest house and watch children at play; in the next frames, the children call on him to join them. In other sequences, ASIMO experiences a cookout on the beach, watches a farmer tap maple trees in a sugar bush, visits an Alberta ranch (where horses run free) and has his photograph taken, as any tourist might, beside a totem pole on the West Coast.

Oddly, although ASIMO is explicitly sexless, people confidently refer to him as male. (What was it about cars and ships, earlier species of machinery, that induced people to refer to them as if they were female?) ASIMO's name is gender-neutral, derived from Advanced Step in Innovative MObility (and not, as some people assume, from the name of the science fiction writer Isaac Asimov).

Sony's AIBO, a robotic dog, acquired his name in a similar manner (Artificial Intelligence roBOt). Introduced in 1999, AIBO struck humans as charming, endearing and cute as well. Sony, alas, retired him in 2006; he lives now at the Smithsonian Institution in Washington.

Isaac Asimov produced history's first draft of robotic legislation when he wrote his Three Laws of Robotics in 1942:

The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The Second Law: A robot must obey any orders given to it by human beings, except when such orders would conflict with the First Law.

The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First Law or the Second Law.

But Mr. Asimov's three basic laws will not be adequate - as, earlier, Moses's 10 moral laws were not. Adam necessitates Eve. ASIMO necessitates ASIMA. You can't have one without the other - not, at any rate, without the risk of going to court. In what ways, aside from mere physical form, will ASIMA differ from ASIMO? Presumably, people will recognize the physical distinctions.

But what other attributes will she possess? Will she have different skills? A different IQ? A different personality? This question is more urgent than you might think. Researchers at the University of California, Berkeley, have demonstrated a robot that can fold towels - precisely. Will this robot be HIS or HERS?

More importantly, what happens when a male human being finds a female humanoid robot exceptionally charming, endearing and cute? Or, similarly, what happens when a female human being finds a male humanoid robot exceedingly charming, endearing and cute? Will we have grounds for alienation of affection? For divorce? Writing in the journal Computer Law and Security Review, Anna Russell of the University of San Diego asserts that the cyborg (humanoid robot) can no longer be regarded merely as a literary device in science-fiction stories. For all practical purposes, cyborgs exist. And it is inevitable that complicated legal issues will arise, she says, as soon as traditional human "love lines" are blurred, when humans become intimate with machines.

Assuming that society permits physical relationships between humans and machines, Ms. Russell says, cyborgs will necessarily acquire inalienable rights.

"If a self-aware, superintelligent, thinking, feeling humanoid is developed," she writes, "the legal system will be hard-pressed to distinguish this creature legally from human actors on grounds not stemming from a religious or moral prejudice." In other words, Mr. Asimov's principles won't suffice - and, except among people with strict religious convictions, Moses won't help much either.

Computer-driven gizmos are already commonplace in many industries and, indeed, many professions. Intelligent machines that look like people, work like people, love like people and, lamentably, fight like people are much closer than most humans think.

In their 2009 book Moral Machines: Teaching Robots Right from Wrong, Colin Allen, a philosopher of science at Indiana University, and Wendell Wallach, a bioethicist at Yale University, say that Japanese scientists have developed androids that are indistinguishable from humans, that American scientists have developed "service robots" that can care for the elderly and the disabled, that Japanese and American scientists have developed computers that "read" human emotions, and that South Korean scientists have developed armed robots capable of patrolling the country's border with North Korea.

It is South Korean government policy to have a robot in every home by 2020 - for either utilitarian or industrial reasons. (Samsung, a South Korean company, is a pioneer in advanced robotics.)

Professors Allen and Wallach suggest that cyborgs (or, to use the terminology they prefer, artificial moral agents) will have a lot to handle apart from love and marriage. They cite a deadly incident in South Africa in 2007 when a heavily armed "semi-autonomous robot" deployed by the South African armed forces killed nine soldiers and wounded 14 others. In this case, they say, you can blame humans, either for faulty software or faulty hardware.

With any technological advance that makes the robotic soldier more fully autonomous, however, "the potential for bigger disasters will increase."

At some point, they argue, the decision to fire will be made solely by the robot soldier - who will base his or her decision strictly on the external stimuli detected and on his or her own independent assessment.

"Yes, the machines are coming," Profs. Allen and Wallach write. "Yes, they will have unintended consequences on human lives, not all of them good." The process will bring revolutionary change but it will be incremental. "Arthur C. Clarke's HAL remains a fiction," they say. "The doomsday scenario of The Terminator will not be realized by 2029. It is not quite as safe, however, to bet against The Matrix by 2199." In the meantime, a great deal of consideration must be given to what human society delegates to robots and what it reserves exclusively for itself.

So far, most of the heavy thinking on these questions - and especially on human-cyborg conflict - has emerged from comic books. There has been very little speculative analysis of the strictly political consequences. If robots are going to be used to replace people in countries with declining populations, such as Japan, the political priorities of these countries will likely be reflected in the robots they make - as South Korea's autonomous border robots already attest.

Will these robots "get along" comfortably in other countries? Or will nationalistic robots incite resentments - in the same way that immigrants sometimes do? In most countries, people acquire citizenship by their place of birth. Will robots acquire citizenship in the same way?

Will robots be permitted to campaign on behalf of political candidates? Will they have a vote? How will traditional ideology influence the decision?

Will robots become a mark of wealth and class - with all of the covetous politics that wealth and class generate? It would now cost $1-million to clone ASIMO. Will humans insist on medicare coverage for ailing or obsolescent robots?

Can robots be required to keep secrets? From family members? From everyone else? Can robots share secrets with other robots? On the other hand, assuming robots were directed never to lie, could the family survive? Could society? Probably not. Think of one of last year's most depressing films: The Invention of Lying.

The American Bar Association runs a permanent expert committee on artificial intelligence and robotics. In April, the committee published a number of special reports - on the state of AI in telemedicine, on advanced wiretapping and on military drones. In the May issue of The ABA Journal, the association will publish "Robot Rules" - an analysis of pending U.S. legislation that will, for the first time, assign liability for the actions of robots. We live now in the future tense.