
Your credit-card company continuously keeps an eye on your transactions. It flags any anomaly in spending behaviour the instant it fields each authorization request. A phone call asks if your card has been lost or stolen even before you are aware it's gone.

You go for an ultrasound of your unborn child. Its heartbeat pattern is analyzed at the same time as its form is imaged. You are reassured that everything is normal. Had it not been, you would have been alerted to a problem long before your baby showed any troubling symptoms.

Your husband's brainwaves are analyzed by a computerized system that models his brain. Long before any seizure, the system predicts the possibility of large-scale multicell electrical bursts that characterize petit mal epilepsy.

All these advanced tools, some still in the research and development stage and some already entering the commercial world, are examples of a new scientific discipline called artificial neural networks.

The fetal heart monitor was developed by a team at the University of Vienna. Epilepsy prediction by brainwave analysis is the brainchild of scientists at the KFKI Research Institute for Particle and Nuclear Physics in Budapest. The charge-card monitor developed by Visa International is already in daily use.

Both in business and in pure research, artificial neural networks are a big deal. A blend of neurology and systems engineering, the field uses the tools of machine design to understand how the human brain works, then applies that biological knowledge to build new, brain-like machines.

Because of the mix of science and technology, artificial neural networks are highly theoretical and highly practical at the same time.

Of two people conversing at last summer's international ANN conference in Montreal, for example, one might have been an experimental physiologist using MRI brain scans to see how people retrieve memories in complex situations and the other might have been a bank actuary trying to predict how many of her high-risk mortgagees were likely to default.

As a concept -- designing machines that mimic the brain in structure, function or both -- neural networks go back to the early 1940s. This was the time of great conceptual advances by such giants of the fledgling discipline of cybernetics as Alan Turing and John von Neumann.

But while their theories led directly to the computers we use today, the work of the early neural-network researchers was a bust. In 1940, bioscience was too primitive to make sense of the tens of billions of cells that make up the human brain.

Only half a century later have we started to see how the brain works its data-processing miracles. And while many mysteries remain, we have now learned enough to mimic the brain in certain key ways in both structure and function.

It turns out that the machines Dr. von Neumann and friends imagined, and that we use today, are nothing like the brain. Von Neumann machines are slow to think, but lightning-fast to act. The brain is just the reverse.

A modern processor can complete a "flop," a basic unit of data manipulation, in 30 trillionths of a second -- the time it takes a ray of light to cross your fingernail. The brain's equivalent of a flop is a million times slower than a desktop personal computer's -- and yet the brain solves complex problems faster than the best von Neumann hardware.

The key is less how the components are designed than how they go together. The brain isn't much of a sprinter, but it's a fabulous architect.

The von Neumann model uses linear, algorithmic computation. Algorithms are formal instructions that tell a machine what facts to look for, where to find them and what to do with them when it finds them. As any programmer will tell you, usually with a deep sigh and a look heavenward, writing algorithms is a finicky business.

Algorithmic computers are like five-year-old children. Here's a sample algorithmic dialogue, from dad to kindergarten student:

Put on your socks.

Where are they?

In your room.

I can't find them.

Did you look in your sock drawer?

No.

Look in your sock drawer.

Okay.

What do you see?

Socks.

Put them on.

Put them on what?

Once the parent-programmer has assembled instructions that are impossible for his or her machine-child to misunderstand, algorithmic computers can execute them with great rapidity. In fact, most of the power of a von Neumann computer comes from sheer speed.

But there are some problems that brute rapidity can't solve well, and that's where neural networks come in. As with the brain they emulate, slow and steady wins the race.

Both brains and artificial neural networks avoid the straight-ahead, hup-two-three march of algorithmic computers. Computer scientists call this "linear architecture." Instead, neural networks (whether ANNs or the human brain) perform a vast number of computations all at once -- an approach called "parallel architecture."

For example, to sum a column of 1,000 10-digit numbers, a computer adds one number to the next with great speed, quickly working its way through 999 additions to the end. An artificial neural network might instead break the column into 500 pairs and add every pair at the same time, then pair up the 500 partial sums and add those simultaneously too, and so on until a single total remains -- about 10 rounds in all. Each of an ANN's flops is slower, but because so many run at once, parallel processing is faster in aggregate.
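The halving scheme described above can be sketched in a few lines of Python. This is an illustrative sketch only: on real parallel hardware each round's additions would run simultaneously, while here they are simulated one after another.

```python
# Pairwise (tree) reduction: sum a list by repeatedly adding adjacent
# pairs. The additions within each round are independent of one
# another, so a parallel machine could perform them all at once.
def tree_sum(numbers):
    values = list(numbers)
    while len(values) > 1:
        if len(values) % 2:  # odd count: pad with a harmless zero
            values.append(0)
        # One "round": every pairwise addition could run simultaneously
        values = [values[i] + values[i + 1] for i in range(0, len(values), 2)]
    return values[0]

print(tree_sum(range(1, 1001)))  # 500500, in about 10 rounds rather than 999 serial steps
```

A column of 1,000 numbers shrinks to 500, then 250, and so on -- roughly log2(1000), or 10, rounds in place of 999 one-after-another additions.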

The upshot is anything but abstract: It has direct applications in the real world.

A linear-algorithmic computer, even a powerful one, may take days to recognize a photo of a human face, or classify it as unknown. The brain of a four-year-old child effortlessly completes the same task in a 10th of a second.

This simple, long-known fact drives computer scientists crazy. "Something is out of whack here," says Thomas Theis, vice-president, research, at IBM Laboratories in Armonk, N.Y. "Under current modes of storing data, to write a file specifying even a simple living organism such as a paramecium would create a file that was unimaginably huge. Yet nature does it effortlessly, and in less space than a pinpoint. . . . Yes, we can store and manipulate data using digital electronics. But we're just lousy at it."

The key word in Dr. Theis's hair-tearing lament may be "algorithm." ANNs, like natural neural networks, do not need formal instructions. No top-down, detail-obsessed programmer needs to tell them what to do. ANNs process inputs, then rejig themselves to understand them. Like human babies, they suss out the world by and for themselves.

The reason for this awesome ability may be that artificial neural networks copy not only the functions of natural neural networks, but their structures as well. And those structures depend on an intricate interconnection.

"You can view organic neurons as unreliable components if you like," says Simon Haykin, a professor of electrical engineering at McMaster University in Hamilton. "And individually, they are rather messy things. But that doesn't matter, because there are so many of them. More to the point, there are so many possible paths that link them. The possible number of connections among brain neurons is staggering."

A mere 10 neurons have 3,628,800 possible interconnections, Dr. Haykin says -- 10 x 9 x 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1. Mathematicians express this as 10! -- pronounced "10 factorial."

The possible links among one brain's 20 billion neurons number 20,000,000,000! -- a figure you calculate by multiplying 20,000,000,000 by 19,999,999,999, and so on, down to 1. The result, Dr. Haykin thinks, may exceed the number of subatomic particles in the universe.
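The smaller of these two figures is easy to check by machine; a one-liner in Python confirms the arithmetic for 10 neurons (the 20-billion case is far too large to compute, let alone store):

```python
import math

# 10! = 10 x 9 x 8 x ... x 1
print(math.factorial(10))  # 3628800
```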

Forget the starry cosmos. An infant's brain may be more complex than everything else in creation. That mind-boggling complexity defines a neural network. While a linear computer gets its power from its components' speed, the power of an artificial neural network comes from how its zillions of individual neurons are knit together.

That one key understanding has let ANN engineers build simple electronic devices called synthetic neurons. Like the brain cell on which it is modelled, a synthetic neuron accepts electrical signals from many of its fellows. It assesses the relative strength of these inputs; then it outputs its own signal to other synthetic neurons. The result is an ability to analyze reality that to your average programmer is nothing less than spooky.
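A synthetic neuron of this kind can be sketched in a few lines of Python -- a classic threshold unit in the spirit of McCulloch and Pitts. The inputs, weights and threshold below are illustrative, not drawn from any particular device:

```python
def neuron(inputs, weights, threshold):
    """Weigh each incoming signal, sum the results, and fire
    (output 1) only if the total clears the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Illustrative unit that fires only when both of its inputs are active
print(neuron([1, 1], [0.6, 0.6], threshold=1.0))  # 1
print(neuron([1, 0], [0.6, 0.6], threshold=1.0))  # 0
```

Wire thousands of such units together, each feeding its output into many others, and the network's behaviour comes from the pattern of connections rather than from any single unit.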

Again, this self-configuring property mimics the human brain. A good cook cannot say why a cake ingredient seems past its prime. It feels stale. No woodsman can explain how he senses his quarry: It's a matter of intuition -- a fast, subtle, unconscious skill that linear logic cannot duplicate. Intuitive people know, that's all.

The Montreal conference discussed dozens of recent successes for artificial neural networks:

An ANN client-assessor developed in the United States is more adept at sniffing out deadbeat borrowers than any algorithmic system. The ANN looks at age, marital status and current debt load, and predicts defaulters with deadly accuracy.

In combination with mainstream computers, ANNs help U.S. airlines allocate passenger spaces. They adapt to swings in seat demand and availability that would boggle an algorithmic system.

Elsewhere in the world, ANNs: Recognize individual human voices; recover telecommunications data lost to faulty software; translate Chinese ideograms; help to find undersea mines; recognize individual handwriting; and diagnose hard-to-spot diseases such as hepatitis A and B.

Neural networks can do more than intuit conclusions where linear-algorithmic systems see only a hopeless muddle. Like human brains, they can learn. An engineering technique called back-propagation feeds a network's errors backward through its layers of artificial neurons: the network compares its output with the right answer, and each neuron adjusts the weight it gives its inputs so as to shrink the difference. This feedback lets neural networks recognize their mistakes and avoid them in future, permitting them to be taught.
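For a single neuron, this style of learning reduces to the so-called delta rule -- the one-layer special case of back-propagation. The sketch below uses made-up training data and nudges each weight in proportion to the output error:

```python
# Delta-rule training of one linear neuron: show it examples, measure
# how wrong its output is, and nudge each weight slightly in the
# direction that reduces the error.
def train(samples, targets, rate=0.1, epochs=500):
    weights = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = sum(xi * wi for xi, wi in zip(x, weights))  # neuron's guess
            error = t - y                                   # how far off it was
            weights = [wi + rate * error * xi for wi, xi in zip(weights, x)]
    return weights

# Made-up task: learn y = 2*a + 1 (the constant 1.0 input acts as a bias)
inputs = [[a, 1.0] for a in [0.0, 1.0, 2.0, 3.0]]
targets = [2 * a + 1 for a in [0.0, 1.0, 2.0, 3.0]]
w = train(inputs, targets)
print(round(w[0], 2), round(w[1], 2))  # weights settle close to 2.0 and 1.0
```

Nobody tells the neuron that the rule is "double it and add one"; the weights drift toward those values simply because each correction makes the next guess a little less wrong.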

Problems do arise with artificial neural networks. When their decisions are correct, they seem magical, but when they're wrong, they are simply absurd.

One paper at the Montreal conference discussed an ANN created to identify armoured vehicles in battle. Like many infant ANNs, it was first taught by being fed hundreds of examples. Engineers showed it various tanks, each labelled friend or foe. Yet the system failed its tests miserably, classifying photos of sports cars and even bicycles as foes. The engineers were stumped until they realized that every enemy photo fed into the ANN was, by chance, brightly lit. Incorrectly but understandably, the network inferred that any device in full sunlight was hostile.

Despite these growing pains, the consensus at the Montreal conference was that artificial neural networks have immense promise. Carver Mead of the California Institute of Technology summed things up: "The nervous systems of animals are able to accomplish feats that cannot be approached by our most powerful computing systems," he said. "Given the exponential increase in computing power over the last 45 years, our inability to rival the common housefly has become downright embarrassing. What is going on?"

Neural networks, Dr. Mead said, may diminish humanity's embarrassment -- letting us do what nature already does so well.

Christos Stergiou from Britain's Stirling University added: "The most exciting aspect of neural networks is the possibility that some day conscious networks might be produced."

If this proves true, then neural networks may one day give us the soul of the robot -- the long-sought ghost in the machine.

William Illsey Atkinson is a frequent contributor to The Globe and Mail on science and technology.
