Week 3: Robots, Androids and Computers

 

This is a major topic for “hard” science fiction, with its focus on scientific exploration and innovation, but it also feeds strongly into the popular imagination through the daunting concept of technology out of control. Isaac Asimov’s I, Robot is one of the major influences on this genre. It sets out the “Laws of Robotics” that ensure that robots behave benevolently. However, the idea of goodness is a problem: whose view of what is good for an individual, or for humanity, do we accept? Does robot technology make for ultimate happiness, or does it trap humanity in a childlike state, ultimately unable to take responsibility for its own progress? This is explored in many of the later Asimov robot stories. Extending the genre is the android, providing both benevolent (Data, C-3PO, R2-D2) and malevolent (Alien, The Terminator) characters. As well as there being technical differences between robots and androids, the androids are often better-developed characters, but is this because writers wish to humanise their behaviour rather than investigate technological development? The development of artificial intelligences has also become a popular subject: Hal in 2001: A Space Odyssey, Wintermute of Neuromancer, and the ships of the Culture.

 

Main Texts

Asimov’s Laws of Robotics (short story handouts)

Clarke, A. C. 2001: A Space Odyssey. HarperCollins, 1968. (A/S office; V0976, V3251, V4070)

Gibson, W. Neuromancer. Penguin, 1984.

 

Related Material

Robots and Empire, Isaac Asimov (A/S office)

The Culture series of Iain M. Banks: Consider Phlebas, The Player of Games, Use of Weapons, Excession, Look to Windward (Orbit, 2000).

The robot books of Isaac Asimov: The Caves of Steel (PZ1 Sci (Asi)), The Naked Sun (PZ1 Sci (Asi)), I, Robot (PH9002, PZ1 Sci (Asi)), The Complete Robot (PH92002), The Robots of Dawn; also A/S office.

The Measure of a Man, The Best of Both Worlds, I, Borg, Descent (ST TNG)

Dreadnought (STV)

The Terminator (A/S office; V3202, V4075), Terminator 2: Judgment Day (A/S office; V3203)

The Terminator, BFI Film Classics, S. French and J. Cameron, 1996 (PN1997 Cam)

Blade Runner (A/S office; V1383)

The Robots of Death, The Android Invasion (Dr. Who)

Gridiron, Philip Kerr, 1995

Westworld (A/S office)

Robocop (A/S office)

Metropolis (A/S office), V2345

Bicentennial Man


 

Discussion Topics/Essay Questions

 

·          Asimov views robots as benevolent, but does not see their existence as beneficial to man. How does he resolve this paradox?

 

·          The drones and minds of the Culture are regarded as “sentient”. Do we really understand what this means? How do we view sentience in relation to robots, androids, computers, cyborgs?

 

·          To what extent do robots like C-3PO or Data reflect human desires to “tame” intelligence?

 

·          Collect examples of “the machines take over/run out of control” stories. What do they have in common, and how do they differ?

 

·          Are Wintermute’s attempts to reinvent itself as a living entity symptomatic of our own paranoia about future technologies?

 

·          “The sixth member of the crew cared for none of these things, for it was not human.” (Clarke, 2001: 106). Is Hal really a member of the crew?

 

Quotes

 

Asimov’s Laws of Robotics:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

From Handbook of Robotics, 56th Edition, 2058 A.D., as quoted in I, Robot. (1950, Doubleday)

In Robots and Empire (ch. 63, 1985, Doubleday), the “Zeroth Law” is extrapolated, and the other Three Laws modified accordingly:

 0. A robot may not injure humanity or, through inaction, allow humanity to come to harm.

Source: http://www.clark.net/pub/edseiler/WWW/asimov_FAQ.html#series13 (2001)

 

“By the time I was in my late teens and already a hardened science fiction reader, I had read many robot stories and found that they fell into two classes.

 

In the first class there was Robot-as-Menace. I don’t have to explain that overmuch. Such stories were a mixture of ‘clank-clank’ and ‘aarghh’ and ‘There are some things man was not meant to know.’ After a while they palled dreadfully and I couldn’t stand them.

 

In the second class (a much smaller one) there was Robot-as-Pathos. In such stories the robots were lovable and were usually put upon by cruel human beings. These charmed me. In late 1938 two such stories hit the stands that particularly impressed me. One was a short story by Eando Binder entitled ‘I, Robot’ about a saintly robot named Adam Link; another was a story by Lester del Rey, entitled ‘Helen O’Loy’ that touched me with its portrayal of a robot that was everything a loyal wife should be.

 

When, therefore, on June 10, 1939 (yes, I do keep meticulous records), I sat down to write my first robot story, there was no question that I fully intended to write a Robot‑as‑Pathos story. I wrote ‘Robbie,’ about a robot nurse and a little girl and love and a prejudiced mother and a weak father and a broken heart and a tearful reunion. (It originally appeared under the title - one I hated - of ‘Strange Playfellow.’)

 

But something odd happened as I wrote this first story. I managed to get a dim vision of a robot as neither Menace nor Pathos. I began to think of robots as industrial products built by matter‑of‑fact engineers. They were built with safety features so they weren’t menaces, and they were fashioned for certain jobs so that no pathos was necessarily involved.

           

As I continued to write robot stories, this notion of carefully engineered industrial robots permeated my stories more and more until the whole character of robot stories in serious printed science fiction changed - not only that of my own stories, but of just about everybody’s.

           

That made me feel good and for many years, decades even, I went about freely admitting that I was ‘the father of the modern robot story.’”

 

Introduction, The Complete Robot (Asimov, 1982).

 

“The computer is of course another piece of science fiction which became real and ate the world.”

 

From Ursula Le Guin’s introduction to The Norton Book of Science Fiction

 

“In 1950 he (Alan Turing) derived the Turing test for machine intelligence. Essentially this would involve a human being talking, probably via a keyboard, to a machine and to another human. If, after a reasonable period of time, he could not tell the difference, then we would be obliged to credit the machine with intelligence. This is a morally significant assumption. It means we do not need evidence of the machine’s “inner mind”; we can work on the basis of only what it does. Normally I credit other people with minds because I believe I have one. But this is a statement of faith. The only truly scientific test is behaviour, and so a machine that passes Turing’s test must be regarded as intelligent and, presumably therefore, a moral agent. There is no evidence of inferiority.”

 

“A recent episode of Star Trek covered this ground rather elegantly by staging a trial to establish whether the android Data had rights. Significantly, the baddie was the man who said he hadn’t and the goodie - Captain Picard - was the one who said he had. There is an odd reversal of values going on here, in which the devotedly liberal Picard wants to extend morality to machines. Once he would have been Aldous Huxley or G K Chesterton, attacking the mindlessness of technocratic culture. Now he is a starship captain who sees that androids have feelings too.”

 

Bryan Appleyard, Sunday Times, 11 May 1997

 
