Down the Rabbit Hole: The Origins of Artificial Intelligence

The term “artificial intelligence” is being thrown around a lot lately. But what is artificial intelligence, really? With A.I. assistants like Siri and Cortana in millions of pockets, some argue the world is approaching what is known as the Singularity, the era of the machine. Though these A.I.s are nothing like Skynet in James Cameron’s 1984 sci-fi smash hit The Terminator, it is imperative to understand where artificial intelligence came from in order to fully comprehend where it is going next.

Many filmmakers and authors feared the rise of artificial intelligence and captured that fear in notable and influential works of fiction that remain relevant today. That said, not every artificial intelligence system wants world domination, despite how A.I. has so often been portrayed in these works.

In this article, we will dive down the rabbit hole and search for the true meaning of “artificial intelligence” by revisiting past views of A.I. in film and literature and aligning them with the real innovations that occurred around the same time. This exploration, we hope, will build a better understanding of the present so that we can predict the future of A.I.


I, Robot. Isaac Asimov publishes a collection of nine science fiction short stories on December 2, 1950. The stories explore Asimov’s interest in robotics and morality, and not only the interaction between man and machine but also between man and himself. Within this collection, Asimov sets out what are now known as the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These laws dominate the popular imagination until a conference held at Dartmouth College in 1956, where John McCarthy, Marvin Minsky, and a small group of fellow researchers coin the term “artificial intelligence” for the first time.

In 1968, Stanley Kubrick directs, co-writes, and produces what would become one of the most influential science fiction films to date, 2001: A Space Odyssey. The film deals with themes of man versus machine, existentialism, and artificial intelligence. It follows a crew of astronauts and scientists aboard a spaceship bound for Jupiter, a ship run by its onboard computer, HAL 9000. Over the course of the film, HAL uses his sentience to murder the crew members who fear him and plan to disconnect him, breaking all Three Laws of Robotics.

AI Winter. Criticism of the field’s unmet promises soon halts progress on artificial intelligence, cutting both public interest and government spending. The years 1974 through 1980 become known as the “A.I. Winter,” and a second winter follows from 1987 to 1993. The second winter results from the collapse of the market for specialized A.I. hardware, overtaken by cheaper desktop computers, and from further cuts to government research funding.

In 1984, James Cameron releases The Terminator, starring Arnold Schwarzenegger as the Terminator. In the near future, an A.I. known as Skynet becomes sentient and initiates a nuclear holocaust, then sends the Terminator back in time to kill Sarah Connor before she can give birth to her son John, who will grow up to lead the human resistance against Skynet.

It is hard to say how Hollywood’s depiction of artificial intelligence affected its development over time, or how it shaped the general public’s feelings about the technology. Yet, given how often A.I. ends up as the antagonist, it is possible that filmmakers wanted to reinforce that fear within the cultural and historical context of the films themselves.

Chess and…Jeopardy?! On May 11, 1997, IBM’s Deep Blue becomes the first computer to defeat a reigning world chess champion, Russian grandmaster Garry Kasparov, in a match played under standard tournament conditions. Development of Deep Blue began in 1985 at Carnegie Mellon University, and IBM later hired the development team. Kasparov demands a rematch, but IBM quickly declines and disassembles Deep Blue altogether. Deep Blue pioneered computer chess research and chess programs, and would become the spark for successful (and non-harmful) artificial intelligence.

In 2011, IBM’s artificial intelligence “Watson” beats longtime champions Brad Rutter and Ken Jennings on the trivia show Jeopardy!. Watson’s technology now powers the Watson Discovery Advisor, which ingests millions upon millions of scientific papers and other research material. According to Tanya Lewis of Live Science, “Several research institutions are already using the new Watson system. For example, Baylor College of Medicine in Houston used the technology to identify proteins that modify protein p53, which is involved in preventing cancer. With about 70,000 existing research papers on this protein, a task that would have taken years to complete may take only weeks with Watson, IBM said.” See? They’re not all bad.

What Next? Who knows? On October 14, 2011, Apple introduces its intelligent assistant, Siri, with the new iPhone 4S, and Siri remains a staple feature of iPhones today. Artificial intelligence is slowly crawling into the light as more and more companies turn their attention to smarter computers and almost-sentient personal assistants. But what lies next for artificial intelligence? Next week, we will take a look at artificial intelligence in the present day to figure out what to anticipate in our future. As Arnold so eloquently said, “I’ll be back.”
