
Q&A

Artificial intelligence pioneer aims to make computers learn like brains

Geoffrey Hinton, a pioneer in artificial intelligence, was awarded the country's top science prize last week, the prestigious Gerhard Herzberg Canada Gold Medal. The prize, awarded by the Natural Sciences and Engineering Research Council, comes with a guarantee of $1-million in funding over five years. The University of Toronto researcher spoke with Anne McIlroy about his efforts to get computers to learn the way humans do.

Last week, an IBM computer named Watson bested humans on the television program Jeopardy!. Who were you rooting for?

Watson.

Why?

Well, it is an example of artificial intelligence. That's the field I'm in, so it is nice to see progress.

How is Watson different from the kind of artificial intelligence you are working on?

There are two main ways. The first is, we want to do a lot more by learning and a lot less by hand programming. Watson was a mixture.

What is the difference?

Suppose you took the whole of Wikipedia, and just fed it to a learning program and didn't tell it anything about English and grammar. We would like something that could take that stream of characters and learn what words mean, learn grammar, and learn what the article is about. That is the dream, a general, all-purpose learning algorithm. We have made enough progress to convince me that it is possible.

In hand programming, someone writes programs to help the computer do specific tasks, like figure out language or what kind of question is being asked?

Yes. If you think of people, we have very little hand programming. We have a brain, we get inputs and after a while we figure it out. By five you have pretty much made sense of the world in terms of understanding language and what objects are and stuff like that.

So you are trying to get computers to learn the way a baby learns?

Basically, we want to understand how the cerebral cortex of the brain learns. The most interesting thing about the cortex is it all looks pretty much the same. So, what we have in there is a general-purpose learning algorithm, something that will take input from the senses and make sense of it. We use the same algorithm for both vision and speech.

How do you know this?

Researchers have rewired ferrets' brains so that the visual input is sent to the bit of the cortex that would normally deal with sound. And this bit learns to do vision. It has been prewired to deal with sensory input, but not necessarily sound. So the reason different regions of the brain do different things is mainly because of what they are connected to.

So you learn about learning by studying animals?

We suspect that all mammals are learning pretty much the same way. People are special in that we have these very late evolutionary adaptations to do symbol processing on top of all that. I think that if you understand a rat properly you would be most of the way to understanding a person.

Artificial intelligence isn't just about logic and reasoning?

Artificial intelligence started out about 50 years ago, and the inspiration for it was logic: the idea that computers could do logic, that reasoning was all about logic, and that if we could do reasoning on computers we would be able to build intelligent machines. But there are some things our brains are better at than computers that don't require logic, like speech recognition and vision. A different branch of artificial intelligence is inspired by how the brain actually works. It is called neural networks; it is people trying to make artificial systems work in a way that is similar to the brain.

At this point, how good are computers at learning compared to us?

They are not nearly as good. Part of it is the hardware: we have many billions of neurons, each of which has thousands of connections. Even now, it is very hard to get computers that have the same amount of processing power and, particularly, the same access to stored knowledge. The brain can access many gigabytes of knowledge in a tiny fraction of a second. Only the biggest supercomputers can do that kind of thing at present.

What kind of things do you think they will be able to do in the future?

Well, I think in the long run, 50 to 100 years, anything we can do they can do better. But there are dark implications before then. There is going to be an issue of computers that are designed as soldiers, whose purpose is to kill people, and that also have some autonomy in doing this. The evil in all that is not in the machines but in the people controlling them.

Is there an upside?

If we do begin to understand more about how the brain learns, I think that should be very effective in education. We should be able to build systems that are much better at teaching.

Is there anything that you have gleaned from your research that has changed your own approach to learning?

One thing we have learned is that for lots of kinds of learning, you need to do it lots of times. There is insight learning, when you realize something in one trial. But for lots of other learning, like learning a language, you have to keep at it.

Is it the same for computers?

Probably, yes. For the learning algorithms we know, computers have to go through the information many times to get really good at it.

The interview has been edited and condensed.
