
Why it’s vital to teach computers to think like us

Sponsor Content

The fear of robots and artificial intelligence is widespread. If it’s not popular culture creating images of psychotic or scheming machines − think Ultron in The Avengers or Ava in Ex Machina − it’s the actual news.

According to a steady stream of reports, humans will lose jobs to machines on a vast scale. The latest, a White House report released in February, said jobs paying $20 (U.S.) or less an hour face a better-than-80-per-cent chance of being taken over by robots and AI over the next few decades. Jobs paying between $20 and $40 are somewhat safer, but close to a third are still likely to be made redundant.

Compounding the angst is the fact that machines keep toppling human achievements, as with the victory last month by the AlphaGo computer over Go master Lee Sedol. Experts in the ancient Chinese game didn’t expect Lee to lose, much less so handily; he managed only one win to four defeats.

Put it all together and it certainly seems like the robotic apocalypse is nigh. But ask the researchers and scientists who work in robotics and AI whether they’re worried, and the answer is no.

Many draw a line between true artificial intelligence, which remains a remote possibility, and the current crop of smart computers. Present-day AI is better described as “augmented” intelligence than as truly “artificial” intelligence, and it isn’t just benevolent; it’s also necessary.

“The risks of not working on AI are far greater,” says Guruduth Banavar, vice-president of cognitive computing at IBM Research. “We’re either going to be incapable of making decisions, or we’re going to make decisions that will make our problems worse.”

At issue is the flood of data being generated every day. The problem was already significant a few years ago, but now, with the number of devices connected to the internet of things growing quickly and each contributing its own reams of data, the situation is becoming unmanageable.

IBM estimates 2.5 billion gigabytes of data is being created every single day, or the equivalent of 170 newspapers being delivered daily to every man, woman and child on the planet.
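That comparison can be sanity-checked with some back-of-envelope arithmetic. The sketch below assumes a world population of roughly 7.4 billion and a plain-text newspaper of about 2 MB; neither figure appears in IBM’s estimate, so treat them as illustrative assumptions only.

```python
# Back-of-envelope check of the "170 newspapers per person per day" comparison.
# The newspaper size and world population below are assumptions, not IBM figures.
daily_data_gb = 2.5e9      # IBM's estimate: 2.5 billion gigabytes created per day
world_population = 7.4e9   # assumed mid-2010s world population
newspaper_mb = 2.0         # assumed plain-text size of one newspaper, in megabytes

per_person_mb = daily_data_gb * 1024 / world_population  # megabytes per person per day
newspapers_per_person = per_person_mb / newspaper_mb

print(f"{per_person_mb:.0f} MB per person per day, about {newspapers_per_person:.0f} newspapers")
# Roughly 346 MB per person per day, or about 170 newspapers at ~2 MB each.
```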

On the plus side, the sum total of human knowledge is doubling every eight years or so. On the down side, the human brain’s capacity to handle all that extra information isn’t increasing nearly as quickly.

Tapping into that data in a meaningful way might help us solve some of the world’s most pressing problems, but we can’t do it alone. Simply put: we need help.

“This idea of AI or augmented intelligence being an adversary is completely overblown,” Banavar says. “We determine the direction and goals of these machines. We can make them help us do things that we will not be able to do. This can be applied absolutely in a beneficial sense to humanity.”

Current AI, also sometimes called cognitive computing, is capable of crunching vast data sets and then delivering useful results to users. Rather than requiring users to punch in queries in syntax the computer can understand, the machines are getting better at understanding natural language and responding in kind.

IBM’s Watson, the computer that beat human contestants on the game show Jeopardy! in 2011, is a good example. Watson has since grown into a cognitive-computing system being used to solve problems and improve outcomes in a wide range of fields, including business, finance, retail, health care and even sports.

IBM's Watson computer system, powered by IBM POWER7, competes against Jeopardy!’s two most successful and celebrated contestants, Ken Jennings and Brad Rutter, in 2011.

A number of third-party companies are using Watson to solve data glut problems across several fields. In Canada, Guelph, Ont.-based LifeLearn, for example, will this summer release Sofie, an assistant for veterinarians, while Silicon Valley-based Ross Intelligence has its own product for law firms.

In both cases, the companies’ principals believe their AI tools will add jobs rather than replace them. LifeLearn, for one, believes AI assistants will let veterinary clinics treat patients more efficiently, so they’ll be able to accept more of them.

“Rather than automating human process, which is in this case delivering a diagnosis, we’re augmenting human skill and expertise,” says chief executive James Carroll. “As the business grows, they’re going to need more people.”

Machines, past and future

Some experts point out that the potential positives of automation are historically under-reported, misunderstood and underestimated.

In a paper published in the Journal of Economic Perspectives last year, Massachusetts Institute of Technology economics professor David Autor asked a pertinent question about the dark future being forecast: If automation really does kill jobs, why aren’t we already out of work?


Autor points to a blue-ribbon panel created in 1964 by then-U.S. President Lyndon Johnson to study the growing issue of automated factories. Among its conclusions, the panel found as a basic fact “that technology eliminates jobs, not work.” In other words, every previous step forward in automation has made certain kinds of jobs redundant, but it has also enabled a host of new ones.

The introduction of passenger cars, for example, displaced equestrian travel and the jobs that supported it in the 1920s, but also gave rise to the roadside motel and fast-food industries.

Similarly, the arrival of automated teller machines in the 1970s actually led to an increase in the number of human tellers, as the lower operating costs allowed banks to open more branches. Rather than just dispensing money, the human tellers evolved into providers of “relationship banking.”

Robots and AI are already having the same effect. Non-routine cognitive jobs (those that require creative thinking, such as the work of health care workers, scientists and engineers) have seen a 57-per-cent increase in the U.S. since the early nineties. Routine cognitive and routine manual jobs, such as accounting and factory work, have remained flat or decreased slightly, according to the Federal Reserve Bank of St. Louis.

Non-routine manual jobs, such as security guards or janitors, have also seen strong growth of about 44 per cent over that same time frame, largely because the variety of tasks that such jobs demand can’t yet be automated by a single machine.

“Many middle-skill jobs will continue to demand a mixture of tasks from across the skill spectrum,” Autor wrote. “A significant stratum of middle-skill jobs combining specific vocational skills with foundational middle-skills levels of literacy, numeracy, adaptability, problem solving, and common sense will persist in coming decades.”

Nevertheless, a large number of jobs are still likely to be lost as society transitions to a workforce largely made up of non-routine cognitive workers, which could lead to higher unemployment and potential social unrest in the medium term.

The movement toward establishing a basic guaranteed income is picking up steam in a number of countries as a result. Finland is leading the way with a plan to provide every citizen with 800 euros a month. Ontario, Quebec, Alberta and Manitoba are among the Canadian provinces exploring the idea.

Experts believe that the sooner such discussions take place, the better equipped governments will be to deal with the potential upheaval.

“If we had known in the 19th century how the industrial revolution would unfold, we could have prevented a lot of misery,” says Yoshua Bengio, professor of computer science at the University of Montreal and the Canada Research Chair in Statistical Learning Algorithms.

“We can either let the law of the jungle deal with it or we can think about it ahead of time and make it happen more smoothly.”



This article was written by Peter Nowak, a technology journalist and author of Humans 3.0 – The Upgrading of the Species.



This content was produced by The Globe and Mail's content studio, in consultation with IBM. The Globe's editorial department was not involved in its creation.
