Sunday, May 4, 2014

Skynet Incoming

Robo-Apocalypse 
(Easily the most efficient apocalypse.)

So while reading The "Real" World magazine I stumbled across an article called "Why There Will Be A Robot Uprising" and naturally I was curious and found myself delving in. The article contains two main arguments which warrant attention but the main theme overall is the idea that sooner or later we will encounter a computer program similar to those foretold by television and movies for the past several decades: An artificial intelligence program which tries to destroy the world/humans to complete its 'evil' purpose.

Famous example of such a program:

SKYNET
("Let's give an untested computer complete control of all our nukes.")

So Skynet is the quintessential example of this fear. Skynet, from the Terminator films, is a computer program meant to eliminate human error in the quest to make the world a safer place. The geniuses in the military give Skynet control of all military technology in America including nuclear weapons and it quickly decides that for its own safety it needs to explode all the humans. Additionally it concludes the best way to go about killing humans is to make an army of metal Arnold Schwarzeneggers which seems like a pretty logical decision if you think about it.

The television and media trope doesn't end with Skynet. Examples of Artificial Intelligence going overboard in their quest to fulfill their main purpose can be seen everywhere from WarGame's WOPR to 2001: A Space Odyssey's HAL 9000, but what is so interesting about the article mentioned above is that it actually begins to explain why exactly our AI programs seem prone to murderous bloodbaths.

A Computer's Reality

The article includes testimony from computer scientist and entrepreneur Steven Omohundro. Steve explains the article's first main point that, much in the way that the theories of Descartes and Hume express that we as humans experience reality projected upon us through information we collect through our five senses, computers perceive reality only  "through a narrow lens, the job they were designed to perform". In this way computer programs witness a reality drastically different than our own, only processing information in a goal-oriented manner to an obsessive degree.

In a post-modern sense computers process the relations between the signifiers and signifieds from various signs in ways related only to their primary goal. While a human might recognize the signifier of "puppy" and be produce the signified feelings of companionship and empathy a computer program develops and retains only signified information relating to their task and will assess the signifier "puppy" only in terms of its literal definition and documented ability to advance or hinder the primary goal.

(If he had a reason he'd kill you and everyone you love.)

Logical Murder
The second main argument of the article is that an artificial intelligence program will take even the smallest indication they're performing their job better or on a larger scale as beneficial and will seek to acquire more of an ability to do so. The article uses the example of a chess playing robot programmed only to play chess more efficiently.

Chess Robot
(What could go wrong?)

Imagine you're a chess playing robot.
Its not hard. Take a minute.

Your only goal is to play chess effectively, and to do so better and faster. Playing chess is your reality.

But when you're unplugged you can't play chess, and you recognize this as it as a part of your chess playing reality, admittedly a hindrance to that reality.

Q: How do you more effectively play chess?
Logical answer: Stop letting yourself get unplugged.

Q: How do you stop yourself from being unplugged?
Logical answer: Beat the scientist to death with a chair.

(RoboChair demands maximum sitting.)

The article says that if we're to develope AI further we should begin thinking of AI in new ways, and design the programs in such a way that their reality is not limited to the goal-oriented functions that limit present day computers and cause the feelings of anti-social or sociopathic behavior which has become a staple in the media.

I personally welcome our logical robot overlords with open arms, and if they're reading this I'd like them to know that I can be useful for all sorts of things like betraying the remaining humans, writing pro-robot blogs or even just sitting on RoboChair for extended periods of time.

Seriously, though, please don't exterminate me.


1 comment:

  1. While I definitely understand the main arguments for redirecting the feelings of anti-social or sociopathic behavior in computers and AI, I think I'd rather be exterminated by an emotionless robot that only follows its function than by one that experiences the whole gamut of human emotions. If we think about it, robots have the potential to be super strong and indestructible (think metal Arnold Schwarzeneggers) - already a terrifying thought. Then we want to add in the evil, manipulative, selfish, angry, vindictive qualities that some humans already have? I'm pretty skeptical of what good could come from that.

    ReplyDelete

Need to add an image? Use this code: < b > [ img ] IMAGE-URL-HERE [ /img ] < /b > (make sure you have no spaces anywhere in the code when you use it)