Artificial Intelligence: No, we’re not there yet.

Recently, I’ve seen accelerated growth in media hype about Artificial Intelligence. Normally I’d ignore this kind of hype as science’s version of TMZ, but I’ve had to explain to quite a few people why our recent developments in Machine Learning aren’t the Artificial Intelligence that people make them out to be. It doesn’t help that people like Elon Musk say things like:

“The risk of something seriously dangerous happening is in the five year timeframe” - Elon Musk

It also doesn’t help that pop-technology media like Wired put out articles claiming that “AI has finally been unleashed on the world”.

What is AI? #

We’ve used the phrase Artificial Intelligence colloquially for a lot of things, and there’s nothing wrong with that. When we’re playing Left4Dead, it’s totally appropriate to scream obscenities at the game AI for cheating. Sometimes people even name their Machine Learning systems and call them AIs. What people are referring to here is a weak or narrow AI. A weak AI is a non-sentient computer intelligence that is specifically trained to perform a finite number of tasks, typically one. I work daily on this sort of “weak AI”, as does almost every other ML researcher.

In contrast, a strong AI, or Artificial General Intelligence, is an intelligence that can perform any intellectual task a human can perform. While there is some disagreement about the exact definition, it’s generally agreed that a strong AI can:

- reason and make judgments under uncertainty
- represent knowledge, including common-sense knowledge
- plan and learn
- communicate in natural language

There are quite a few tests that people have devised to determine whether a system is truly a strong AI. Perhaps the most famous is the Turing test, in which a human judge poses a series of questions to both a human and an AI system. If the judge can’t tell which respondent is the computer, the AI has passed. This test has proven incredibly difficult, and no system has yet passed it uncontroversially. Yet even this is a weak test of strong AI: it is extremely likely that the first system to pass the Turing test will be a narrowly-focused weak AI trained specifically for that purpose. This is essentially what IBM’s Watson was: a weak AI trained specifically for the task of answering natural language questions using data collected from around the web. Watson would fall just as flat on its virtual face trying to pass the Turing test as a Turing-test-passing system would trying to play Jeopardy.
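
For concreteness, here’s a toy sketch in Python of the imitation game’s structure. The scripted respondents and the coin-flipping judge are hypothetical stand-ins, not a real evaluation; the point is only that the test reduces to fooling an interrogator over a transcript:

```python
import random

class ScriptedRespondent:
    """Hypothetical stand-in that answers every question the same way."""
    def __init__(self, reply):
        self.reply = reply

    def answer(self, question):
        return self.reply

def imitation_game(judge, human, machine, questions):
    # Hide the machine behind label "A" or "B" at random, so the judge
    # can only go on the answers themselves.
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}
    transcript = {label: [(q, r.answer(q)) for q in questions]
                  for label, r in labels.items()}
    guess = judge(transcript)  # judge names the machine: "A" or "B"
    truth = "A" if labels["A"] is machine else "B"
    return guess != truth      # True means the machine fooled the judge

# Toy run: canned respondents and a judge that guesses at random.
human = ScriptedRespondent("I'm fairly sure I'm human.")
machine = ScriptedRespondent("BEEP BOOP... I mean, me too.")
print(imitation_game(lambda t: random.choice(["A", "B"]),
                     human, machine, ["How do you feel today?"]))
```

A real judge would interrogate adaptively, of course, which is exactly what makes the test so hard to game.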

There is even a heated philosophical debate about whether strong AI is possible at all, with well-respected and knowledgeable people on both sides. Personally, I can’t make up my mind and pick a side, although I’m leaning towards implausibility right now. But take that with a grain of salt; my mind changes quite a bit on the subject ;). If you’re interested in finding out more, I recommend starting with the Chinese Room thought experiment.

So, what’s all the hype about? #

There have been monumental advances in the field of Machine Learning over the last 10 years. The continuing exponential growth in computational power, coupled with the powerful new models researchers have developed, has contributed greatly to the capabilities of ML. You may have heard of “deep learning”, the latest buzzword for AI models. It’s simply the newest name for Neural Networks, a model first theorized in the 1940s. With new training algorithms and a better understanding of which network architectures suit which tasks, deeper neural networks can now be built to perform difficult tasks far more effectively.
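
To demystify the buzzword a little, here’s a minimal sketch (Python with NumPy, weights random and untrained, purely illustrative) of what a deep network’s forward pass boils down to: chained matrix multiplications with nonlinearities in between. Everything that makes deep learning work in practice, such as backpropagation, training data, and architecture choices, is omitted:

```python
import numpy as np

def relu(x):
    # Rectified linear unit, one of the activation functions that
    # helped make training deeper networks practical.
    return np.maximum(0, x)

def forward(x, layers):
    # A "deep" network's forward pass: alternate matrix multiplications
    # with nonlinearities, one (weights, bias) pair per layer.
    for W, b in layers:
        x = relu(x @ W + b)
    return x

# Random, untrained weights: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 8)), np.zeros(8)),
          (rng.normal(size=(8, 8)), np.zeros(8)),
          (rng.normal(size=(8, 2)), np.zeros(2))]
print(forward(rng.normal(size=(1, 4)), layers))
```

“Deeper” just means more of those layers stacked up; the recent breakthroughs are in making such stacks trainable at scale.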

Watson is one manifestation of that, as is a phenomenal new paper from Google Research describing a system that can generate descriptions of images. These should certainly be heralded as fantastic achievements. However, they are not examples of strong AI being “unleashed” on the world, as Kevin Kelly of Wired claims in the previously mentioned article.
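
As a rough illustration of why even the image-captioning system is still narrow, here’s a structural sketch of the encoder-decoder idea behind it. The “encoder” and the recurrent step below use random weights as stand-ins for trained networks, so the output is gibberish, but the data flow (image features in, words out one at a time) matches the shape of the real pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = ["a", "dog", "on", "the", "grass", "<end>"]

def encode_image(image):
    # Stand-in for a convolutional network that turns an image into a
    # feature vector. Random here; learned in the real system.
    return rng.normal(size=16)

# Stand-in recurrent step and output weights (random, untrained).
W_h = rng.normal(size=(16, 16))
W_out = rng.normal(size=(16, len(vocab)))

def caption(image, max_words=8):
    h = encode_image(image)
    words = []
    for _ in range(max_words):
        h = np.tanh(h @ W_h)                     # update the hidden state
        word = vocab[int(np.argmax(h @ W_out))]  # pick the highest-scoring word
        if word == "<end>":
            break
        words.append(word)
    return " ".join(words)

print(caption(image=None))  # gibberish without training; the pipeline shape is the point
```

A system like this does one thing: map pixels to sentences. It has no notion of reasoning, planning, or anything else on the strong-AI checklist above.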

Neither is it likely that we’ll see a strong AI in the “5-10 year timeframe” Elon Musk claims (although he may well have insider information about DeepMind). We haven’t even built a system that passes the Turing test yet! As I mentioned, plenty of AI researchers don’t think strong AI is possible at all. Even those who do would agree that Elon Musk is simply generating hype, and that hype can be toxic. Yann LeCun, one of the leaders in deep neural network research and head of the Facebook AI Lab, sums it up perfectly:

“Hype is dangerous to AI. Hype killed AI four times in the last five decades. AI Hype must be stopped.” - Yann LeCun

Instead of getting carried away with our expectations and spinning fake cautionary tales, let’s marvel at the great achievements of the coming years. I, for one, am looking forward to being driven around drunkenly by my self-driving car.

Discuss this on Hacker News.