Mathematicians are generally praised for finding solutions, but sometimes it's even more important to pose the right problem. Such is the case with Stephen Cook, who on Tuesday was named the 2013 winner of the Gerhard Herzberg Canada Gold Medal for Science and Engineering, the federal government's highest honour for non-medical research.
In 1971, Prof. Cook was a newly arrived associate professor at the University of Toronto when he formulated what is now known as the "P versus NP problem," a question that cuts to the heart of what computers can and can't do, with big implications for e-commerce and digital security in an increasingly data-driven world. It is famously listed by the Clay Mathematics Institute as one of a handful of the most important unsolved problems in mathematics – with a million-dollar prize waiting for anyone who can crack it.
Since his discovery, Prof. Cook has trained an entire generation of scientists in the esoteric but crucial field of computational complexity theory, and his contributions have won him the Turing Award, considered the Nobel Prize of computer science. Last month, the U.S.-born researcher who made Canada his home was named to the Order of Ontario. And he is still plumbing the depths of the question that cemented his reputation more than four decades ago.
What does complexity mean to you?
When we're talking about computational complexity, it means how much time or memory it takes to solve a problem. That's what I'm interested in.
How does this question come up in everyday life?
One practical application is in cryptography. We're trying to prove that certain kinds of problems are actually infeasible to solve with computers because they would take billions of years. That's the hope underlying the encryption schemes that companies like PayPal use, for example. The idea is that when you send your credit card information to them, they can decode it, but an eavesdropper can't. But that very much depends on the assumption that there are no shortcuts a computer can take apart from guessing among all the possible answers, which takes too long. There might be other ways of doing it that we don't know.
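The asymmetry Prof. Cook describes can be sketched with a toy example (this is an illustration, not any real encryption scheme): multiplying two primes together is quick, but an eavesdropper who only sees the product must, as far as we know, essentially guess its way back to the factors.

```python
def multiply(p, q):
    """The easy direction: anyone can compute the product instantly."""
    return p * q

def factor(n):
    """The hard direction: brute-force trial division, i.e. guessing
    among all possible divisors. For the small numbers here this is
    fast, but the work grows rapidly as the primes get longer."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime or 1

# Toy-sized primes chosen for illustration only.
n = multiply(101, 113)   # easy: 11413
print(factor(n))         # hard in general: recovers (101, 113)
```

Real systems use primes hundreds of digits long, where trial division would take far longer than anyone can wait; the unproven assumption is that no fundamentally faster shortcut exists.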
Is that where P versus NP comes in?
Sort of. P stands for what we think of as easy problems (to a computer), and NP stands for search problems where it's hard to find the answer because there are many possibilities to search – but if you have the answer you can easily verify it. So if P = NP, it would mean there is an easy solution to something that is thought to be hard, and encryption goes out the window. The concept turns out to be fruitful in other ways because people run into NP problems in many other areas of computer science, like artificial intelligence, and it's useful to know when you're dealing with a computationally hard problem.
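The search-versus-verification gap can be made concrete with the classic subset-sum problem, a standard NP example (chosen here for illustration; it is not one Prof. Cook names in the interview). Checking a proposed answer takes one pass, while finding one by brute force means trying up to 2^n subsets.

```python
from itertools import combinations

def verify(nums, target, subset):
    """Verification is easy: check the subset comes from nums
    and that it sums to the target. One quick pass."""
    return all(x in nums for x in subset) and sum(subset) == target

def search(nums, target):
    """Search is hard in general: enumerate every subset of nums,
    which is 2^n candidates for n numbers."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = search(nums, 9)        # exhaustive search finds [4, 5]
print(verify(nums, 9, answer))  # checking it is instant: True
```

If P = NP, the slow `search` above would have a fast replacement for every problem of this kind; no one has found one, and no one has proved it impossible.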
What was your first encounter with a computer?
In 1958, I took an undergraduate course in programming at the University of Michigan. They were into computer science early on and they had an IBM 650, which was a vacuum-tube-based computer with a rotating magnetic drum that could hold maybe 2,000 words of memory (about 8.5 kilobytes).
What attracted you to the theoretical side of computer science?
I went to Harvard in 1961 not knowing what I was going to do. My future adviser was a philosopher and logician, but he was interested in computers. I took a course from him and got hooked on the intriguing open question of the limits of computing.
When you first posed the P versus NP problem, did you know you were onto something?
I had thought about it a bit and didn't see how to do it. I didn't realize how important it was going to become nor how extremely difficult it has turned out to be.
Has the way you work changed over the years?
Early in my career, I used to be the single author of my papers. Now I collaborate all the time, very often with students. It's very rewarding.
What do you bring to your collaborations with younger researchers?
The ability to recognize a good idea.