Elon Musk’s Afraid of AI

Wesley Murchison, Friday, 7 November 2014

The Snap:

When Elon Musk speaks, most people listen. The CEO of Tesla Motors, the electric car maker, and SpaceX, the private space transportation company contracted with NASA, and co-founder of the online payment service PayPal, Musk is the 21st-century personification of the inventor-business magnate on a scale not seen since Thomas Edison. So when he spoke about the dangers of artificial intelligence, it created a few ripples in the blogosphere, for sure.

The Download:

At the MIT Aeronautics and Astronautics department’s Centennial Symposium, Musk compared the threat of artificial intelligence to nuclear weapons. Then, as if the image of a mushroom cloud on the horizon weren’t enough to frighten the audience, he drove the point home with a colorful analogy, likening the invention of AI to summoning a demon that cannot be controlled.

“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out,” Musk said.

The audience took the comments good-naturedly; one member even whimsically retorted that “there’d be no HAL 9000 to Mars,” in reference to the homicidal computer in the 1968 film 2001: A Space Odyssey. To which Musk replied that HAL 9000 would be a puppy dog by comparison.

In comparison to what, Mr. Musk?

The problem with discussing artificial intelligence is a failure to distinguish between computation and consciousness. Even the articles that criticized Musk for his alarmism didn’t clearly present the difference between the high theory and the practical development of artificial intelligence. On the one side are the theorists, made up mostly of cognitive psychologists, computer scientists and some philosophers who study the phenomenon of consciousness. On the other side are the developers: the businesses and laboratories nestled in the corridors of academia that hire or recruit computer science engineers to develop practical applications of so-called artificial intelligence.

For the former — the thinkers who think about thinking — the latter group has done little to advance artificial intelligence. The creation of IBM’s Watson, Apple’s Siri and Google’s self-driving car are all impressive technological achievements, but they have all accomplished their goals through traditional computing. Essentially, these advanced programs are still instruction-based. These technologies have not shown even an iota of self-awareness or decision-making ability beyond their original programming. We are just getting better at programming computers to do tasks that previously only people could do. A word equally in vogue as AI, but a more accurate descriptor, would be “automation.” These systems might steal your job, but they’re not likely to go on a killing spree.

In point of fact, Mr. Musk, anything like HAL 9000 would be momentous. We know that because the rich study of artificial intelligence starts with a little-known mathematician named Alan Turing. Fittingly, Mr. Turing is receiving his mainstream debut in the film The Imitation Game. Aside from his famed heroism breaking Nazi codes, on which the film’s plot is centered, Turing also proposed a test meant to determine whether a computer’s intelligence is indistinguishable from a human being’s in his paper “Computing Machinery and Intelligence.” Aptly dubbed the Turing Test, it has been at the center of the discussion of AI and consciousness since its publication.

In the Turing Test, a human interviewer converses by text with hidden respondents and must decide which is the machine; the computer’s objective is to convince the interviewer that it is intelligent. So far, the programs said to have passed have done so by using misdirection to trick the human questioners into perceiving intelligence where there was none.
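For readers curious about the mechanics, here is a minimal sketch of a single judging session in Python. The canned_bot, its stock deflections, and the console-based setup are hypothetical stand-ins — not Eugene Goostman or any official test protocol — but they show how a judge questions two hidden respondents, and how the “fool rate” is simply the fraction of sessions in which the judge guesses wrong.

```python
import random

def canned_bot(question: str) -> str:
    """Hypothetical chatbot: deflects rather than answers, the kind of
    misdirection the article describes."""
    dodges = [
        "Ha, why do you ask that?",
        "I dunno, ask me something easier!",
        "That's a funny question. What do YOU think?",
    ]
    return random.choice(dodges)

def human_respondent(question: str) -> str:
    """The hidden human participant types their own answer."""
    return input(f"[hidden human] {question}\n> ")

def run_session(num_questions: int = 5) -> bool:
    """One judging session: the judge questions two unlabeled respondents
    and then guesses which is the machine. Returns True if the bot fooled the judge."""
    respondents = {"A": canned_bot, "B": human_respondent}
    if random.random() < 0.5:  # randomly assign which label hides the machine
        respondents = {"A": human_respondent, "B": canned_bot}
    machine_label = "A" if respondents["A"] is canned_bot else "B"

    for _ in range(num_questions):
        question = input("[judge] Ask both respondents a question:\n> ")
        for label, respond in respondents.items():
            print(f"  {label}: {respond(question)}")

    guess = input("[judge] Which respondent is the machine, A or B?\n> ").strip().upper()
    return guess != machine_label  # fooled if the judge guesses wrong

if __name__ == "__main__":
    fooled = run_session()
    print("The bot passed this session." if fooled else "The bot was caught.")
```

Run enough of these sessions and the share in which the judge guesses wrong is the figure reported for programs like Eugene Goostman below.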

The most recent subterfuge of the Turing Test was performed by Eugene Goostman, a chatbot with the persona of a 13-year-old boy from Odessa, Ukraine. Reports claimed the Russian-developed program convinced 33 percent of human questioners that it was a real person, which isn’t an insignificant percentage: Alan Turing predicted that we’d have computers smart enough to pass his test 30 percent of the time by the early 21st century.

In the case of Eugene, the program succeeded by imitating the enthusiasm and unintelligibility of a teenage boy to mask its lack of natural intelligence. The majority of human interviewers were quick to recognize that, despite the juvenile mimicry, Eugene didn’t respond to their questions with any real conversational give and take.

The initial celebration of Eugene’s success, and the media’s subsequent reevaluation of its failure, is a silver lining to the lack of real progress in artificial intelligence. The Turing Test is also getting its own cinematic debut, with a bit of an erotic twist: in the film Ex Machina, the AI under scrutiny isn’t a pubescent chatbot but a sexualized robot. The Turing Test has migrated from philosophical theory to cultural artifact, not unlike other popular philosophical analogues such as the disembodied brain in a vat or Plato’s Cave.

The true measure of artificial intelligence, however, is still a mystery. Musk’s premonition is based less on science and more on speculation. His perceived threat relies on a lot of assumptions, assumptions that can be challenged with a far more sensitively tuned thought experiment.

Human intelligence is the product of billions of years of evolution. Furthermore, people are not all equally intelligent, nor do we start and end our lives with the same level of intelligence. Since all good predictions are based on sound precedent, it makes sense that the first artificially intelligent machine might only be as smart as a toddler. In addition, it could take years, even decades, if not centuries, to grow first-generation AI into a consciousness as powerful as ours. If so, humanity will have ample time to debate the dangers of artificial intelligence.

Take Action!

Hat Tips:

Washington Post, Time. Image Credit: Flickr
