Q & A Geoffrey Hinton

Why talking is so much tougher than math

To some, it's the most exciting development computer science has seen in years. To others, it's a science fiction experiment with an unknown outcome.

Regardless, there is no denying that artificial intelligence, the field dedicated to making machines that can behave like humans, has begun to reshape the way governments, businesses and consumers work and live.

One of the experts at the forefront of artificial intelligence is Geoffrey Hinton, a computer science professor at the University of Toronto who has received numerous international honours for his contributions to the field, including the 2011 Herzberg Canada Gold Medal for Science and Engineering.

Dr. Hinton has made major contributions to machine learning, a dominant branch of artificial intelligence that uses algorithms, or lists of instructions, to allow computers to look for patterns in large amounts of information. Dr. Hinton believes that in order for AI to succeed, scientists must focus on finding ways to enable computers to learn.

In a recent interview, Dr. Hinton discussed where AI is going, the opportunities that lie ahead, and whether there is any truth to fears that making intelligent machines could backfire on the world.

What are some of the big ways AI is currently being used or applied?

There's basically a split in AI. There are two routes you can take to try to understand [artificial] intelligence. Early AI was mainly based on logic: you're trying to make computers that reason like people. The second route is from biology: you're trying to make computers that can perceive and act and adapt like animals. I'm in the second camp. I'm trying to approach it from the standpoint that if we want some really complicated kind of intelligence, it's going to have to be learned. We're not going to be able to hand-program it.

Essentially, in the early 70s the debate was won by the systems that didn't do learning, the systems that used these complicated, hand-programmed internal representations. They did reasoning but not learning. Since then, everything has changed: currently, the debate is being won by those who emphasize learning.

Let's talk about that world a little bit. In that area, what are the hopes, ideals for what can be accomplished in terms of learning?

In the long run, we want to be able to make things that are as smart and as adaptable as people. You can look out at the real world and figure out what's going on. Almost all your ability to do that is probably learned when you're very little. So essentially, it's turned out to be harder to [replicate] what a three-year-old child can do than to do what a grandmaster at chess can do.

Is that just a function of the fact there are all these complicated processes of the human mind?

Yes, and the fact that we've got a very large brain in which there's a lot going on. We're very good at things like vision and speech recognition and controlling our bodies. Relatively speaking, we're very bad at abstract symbolic things like playing chess or doing arithmetic. And because we're bad at those things, and you need many years of education to get good at them, they were regarded as the height of intelligence. The big surprise over the last half century, and it continues to be surprising, is how hard it is to do vision and even high-quality speech recognition. Humans are still much better than computers at recognizing speech.

Why are some of those things so hard? What are the major challenges there?

Part of it is that we now believe there's just a massive amount of information processing [that] has to be done in order to be good at vision or speech recognition. There's also a massive amount of knowledge you have to have, and you have to have some way of acquiring it. This is an example from the 1970s, I think: 'The city councilmen refused to give the demonstrators a licence because they feared violence.'

Now, when I say that to you, you don't think the 'they' in 'because they feared violence' refers to the demonstrators, yet 'the demonstrators' was the closest noun. It could be that you had a whole bunch of radical city councilmen who said, we've got these wimpy demonstrators, we're not giving them a licence, they're no good, they're not real demonstrators. But that's not what you interpret it to mean. That means you have to know a whole lot about politics, and in particular American politics or the politics of demonstrations, in order to know what the 'they' refers to. Then, to translate it into some other language, you might have to know all of that stuff and be able to apply it in this context. So just to understand an apparently not-too-complicated sentence, everything you might know is involved: a vast amount of knowledge.

How difficult is it, or how realistic is it, to imagine a day when we can have these learning-based systems, when this area really evolves?

It will definitely come. In fact, learning is behind a lot of what companies like Google do now, such as learning how to rank pages. In other words, Google looks at what people click on when they're presented with some choice, and it learns what people really meant when they made that query by seeing which result they click on; in future, it will make that result come first. It learns that from very large numbers of people. That's one place machine learning is used.
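The click-feedback idea Dr. Hinton describes can be sketched in a few lines. This is only a toy illustration of the principle, not Google's actual ranking system; the query, result names and click log below are invented.

```python
from collections import defaultdict

# Hypothetical click log: (query, clicked result) pairs gathered from many users.
click_log = [
    ("jaguar", "jaguar-cars.example"),
    ("jaguar", "jaguar-cars.example"),
    ("jaguar", "jaguar-animal.example"),
]

# Count how often each result was chosen for each query.
clicks = defaultdict(lambda: defaultdict(int))
for query, result in click_log:
    clicks[query][result] += 1

def rank(query, candidates):
    """Order candidate results by observed click counts, most-clicked first."""
    return sorted(candidates, key=lambda r: clicks[query][r], reverse=True)

print(rank("jaguar", ["jaguar-animal.example", "jaguar-cars.example"]))
# ['jaguar-cars.example', 'jaguar-animal.example']
```

Because most users searching "jaguar" clicked the car page, the system learns to put it first, exactly the "look at which one they click on" loop described above.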

Another place is collaborative filtering: when you buy something from Amazon, it recommends other things, or when you use Netflix, it recommends other movies based on the movies you've rated. Those are everyday places now where machine learning is very important. Whether Bing manages to compete with Google depends very much on how good its machine learning is compared with Google's.
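The collaborative-filtering idea can likewise be sketched with toy data: recommend to a user the items liked by other users who rated things similarly. This is a minimal sketch of the principle, not Amazon's or Netflix's actual algorithm; the users, movies and similarity formula are invented for illustration.

```python
# Toy ratings table: user -> {movie: rating on a 1-5 scale}.
ratings = {
    "alice": {"Alien": 5, "Blade Runner": 4, "Titanic": 1},
    "bob":   {"Alien": 4, "Blade Runner": 5, "Memento": 4},
    "carol": {"Titanic": 5, "Notting Hill": 4},
}

def similarity(a, b):
    """Crude similarity: agreement on co-rated movies (higher = more alike)."""
    common = set(ratings[a]) & set(ratings[b])
    return sum(1.0 / (1 + abs(ratings[a][m] - ratings[b][m])) for m in common)

def recommend(user):
    """Suggest unseen movies, weighted by how similar each rater is to `user`."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        w = similarity(user, other)
        for movie, r in ratings[other].items():
            if movie not in ratings[user]:
                scores[movie] = scores.get(movie, 0.0) + w * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))
# ['Memento', 'Notting Hill']
```

Alice agrees closely with Bob about the films they both rated, so Bob's favourite she hasn't seen comes first, which is the "based on the movies you've rated" behaviour described above.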

You see these movies that come out and they always paint these tales, the horror stories, of machines taking over the world. What would be the benefits of developments in AI and what are some of the concerns as well?

It all depends on the political systems in which they operate. Machines can do things cheaper and better. We're very used to that in banking, for example. ATMs are better than tellers if you want a simple transaction: they're faster, they're less trouble and they're more reliable, so they put tellers out of work. Now, whether that's a good or a bad thing depends on what happens next. Potentially it ought to be a good thing in a society that takes care of people. Making everything more efficient should make everybody happier.

But if instead it makes a few bankers extremely rich and puts lots of poor people out of work, that's not good, so it's really a political issue about how it's used. The more worrying thing along those lines is producing robots that can get around in the world. They're still very limited in how well they can do that, but the U.S. defence department wants to replace lots of soldiers with robots. You could see why: they would like to be able to invade places with no American dead. You can see the beginnings of it with Predator drones, which are currently controlled by people.

It's going to be irresistible, you see, if they can make robotic soldiers. Initially, they'll tell you they'll always have a human in the loop, but you know that's not true, because to make them effective you need them to be able to make split-second decisions, and it's going to be just too tempting to make them more and more autonomous so they can make those decisions.

There are so many potential bad things that could happen. Is there any way they can be prevented?

I think the best defence is an open political system with good journalism. If journalists were as effective as Wikileaks, we'd all be fine. The media controls what people see and hear. From my perspective, the main lesson is that we're now getting vast amounts of data and much more computer power, and that allows us to learn a huge amount of stuff that previously had to be hand-programmed. As we get better and better ways of doing learning, computers will take over more and more, but they've still got a long way to go until they're as good as us.

Follow on Twitter: @carlyweeks
