It was one of the great “eureka” moments in the history of neuroscience, caught on film as it was taking place in London, Ont.
The patient, Scott Routley, was severely brain-injured in 1999 after being hit by a police car. Given only two years to live at that time, he had already far outlived that prognosis, but the doctors who had examined him for more than a dozen years could find no signs of awareness in him.
He was unable to communicate with the outside world, seemingly in a vegetative state. But was the 39-year-old secretly aware? Was a working mind trapped in his immobilized body?
So Adrian Owen, a neuroscientist at the University of Western Ontario’s Brain and Mind Institute, put Mr. Routley in a real-time, functional-magnetic-resonance-imaging (fMRI) scanner. The doctor asked the patient to choose one of two kinds of mental imagery to answer the question of whether he was in pain. In effect, the fMRI said Mr. Routley had answered “no.”
The moment was a double first: the first time in the world any patient of this sort had been able to tell doctors anything medically relevant, and the first time such a patient had been able to communicate instantaneously.
News quickly spread around the world of Dr. Owen’s achievement – the ultimate to date in hacking the human brain. Much more than just finding out how different parts of the brain work, his project reaches inside the hidden folds of the grey matter to figure out what a person wants to say, in the brave hope that lost souls can be found.
His work is part of an accelerating trend among scientists to figure out how to link the human brain – both healthy and injured – directly to machines, no muscles required.
Remarkably, the precise moment of the breakthrough happened to be filmed by BBC television journalists, who had spent two years making a documentary on severely brain-injured patients.
The camera caught the play of emotion on Dr. Owen’s face as he realized that Mr. Routley had answered, giving a glimpse of what was going on in the doctor’s brain as well as the patient’s.
“It was scary,” says Dr. Owen, the Canada Excellence Research Chair in Cognitive Neuroscience and Imaging. But the answer he got was the Holy Grail he had been seeking for 15 years, in research in Canada and Britain. “I was absolutely elated. I was very, very excited that he had managed to tell us something that we couldn’t possibly know in any other way.”
This kind of brain-computer interface represents a massive philosophical shift for humans, who evolved and survived as a species through brute force married to cunning. Even our highly wired world still requires us to type or tap or talk to get a computer to work. What if, in the future, chips and neurons were more intimately connected?
Ken Colwell, a PhD candidate in electrical and computer engineering at Duke University in North Carolina, says people in his field of brain-computer interfaces dream of a day when a person can think words and have them show up on a screen.
Already, scientists are playing with the idea and have prototypes for simple tasks. Chinese researchers have helped someone sitting in a wheelchair fly a toy-sized drone by thought. http://www.youtube.com/watch?v=JH96O5niEnI&feature=youtu.be A handful of headsets and helmets are commercially available now that can read your brainwaves to run phones and computers, play computer games and shift the gears on a bike via smartphone.
And the U.S. military’s Defence Advanced Research Projects Agency (DARPA) has shown that army sentinels can double their ability to spot potential security risks by wearing a brain-reading cap linked to a video camera and computer that flag changes in the surroundings the brain has detected before the wearer is consciously aware of them. http://www.darpa.mil/NewsEvents/Releases/2012/09/18.aspx
Electrical engineers at Duke, including Mr. Colwell, are working on something far more subtle and complex. They are trying to improve the speed and efficiency of a virtual keyboard on which patients who are paralyzed or otherwise incapacitated can type using only their brains.
It involves hooking electrodes to the patient’s head with a tight cap and gel, showing the patient random images of letters and then tracking the involuntary electrical responses the brain produces when the patient recognizes the letter needed to spell a word.
One of the barriers to the technology is that it can be slow, frustrating and cumbersome, says Leslie Collins, one of the engineers. Her lab is trying to make the equipment easier to use in the home.
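The decision logic behind such a brain-typing system can be sketched in a few lines. This is a toy illustration, not Duke's actual pipeline: the sampling rate, the response window and the simple "average the brain's response to each letter, pick the strongest" rule are all assumptions, standing in for the involuntary response (often called the P300) that the article describes.

```python
# Toy sketch of how a brain-typing system might score flashed letters.
# Assumes pre-averaged EEG responses, one per candidate letter.
# All parameters here are illustrative assumptions, not Duke's method.
import numpy as np

FS = 250                     # assumed sampling rate, Hz
WINDOW = (0.25, 0.45)        # seconds after a flash where the response peaks

def response_score(epoch):
    """Mean amplitude of one averaged response inside the window."""
    start, stop = int(WINDOW[0] * FS), int(WINDOW[1] * FS)
    return epoch[start:stop].mean()

def pick_letter(epochs_by_letter):
    """Choose the letter whose flashes evoked the strongest response."""
    return max(epochs_by_letter, key=lambda ch: response_score(epochs_by_letter[ch]))

# Demo with synthetic data: the attended letter "E" carries an
# injected bump near 350 ms; the others are pure noise.
rng = np.random.default_rng(0)
t = np.arange(int(0.8 * FS)) / FS
epochs = {ch: rng.normal(0, 0.5, t.size) for ch in "ABCDE"}
epochs["E"] += 3.0 * np.exp(-((t - 0.35) ** 2) / 0.002)
print(pick_letter(epochs))
```

Real systems repeat the flashes many times and average, which is one reason the technique is slow: the signal of interest is tiny compared with the brain's background noise.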
But so far, the technology that links the human brain and the machine is in its infancy. Dr. Owen’s body of work shows how far it has already come, and how surprising its promise is.
When Dr. Owen speaks about his research, he becomes so excited that he leans forward in his chair with infectious enthusiasm. He came to Canada two years ago from Cambridge University in England, a kinetic, ginger-haired man and a bit of a wag – his Twitter name is @Comadork.
He began his pioneering work in 1997, driven mainly by the scientific challenge of trying to figure out some practical applications for the emerging field of three-dimensional brain imaging and by the thrill of doing something no one had ever dreamed of.
“I’m really interested in solving problems that seem to be completely unsolvable,” he says.
That year, he popped a 26-year-old woman who was thought to be in a vegetative state following a fever into a scanner “on a whim,” he says.
He showed her pictures of people she knew and other images. The parts of her brain that are linked to facial recognition lit up when she saw people she knew. Eight months after her illness, the landmark paper on her case was accepted by the medical journal The Lancet. Today, she is able to communicate fluently using an alphabet board. http://adrianowen.org/site/Publications_files/Menon-1998-Lancet.pdf
A whole new world of brain research on those in the seemingly vegetative state opened up for Dr. Owen. What if more patients were being misdiagnosed, he thought, and he could help?
“It’s important to get the diagnosis correct,” he says simply.
Back then, and in many hospitals still today, brain-injured patients were diagnosed based on whether they were capable of consistently moving a bit of their body in response to a command.
It’s the classic movie scenario of the medic at the car crash at the side of the road asking a patient for a squeeze of the hand or a flick of the eyelid if the patient can hear. Cue soaring music.
In turn, that physical movement determined whether doctors considered the patient either “vegetative,” meaning no awareness, or “minimally conscious,” meaning some awareness.
In some parts of the world, the two different states are the dividing line between pulling the plug and leaving it in. But what if the mind was aware and responsive, just unable to show that by moving the body?
Dr. Owen reckoned that an active mind might light up in the scanner in response to verbal commands, not just to images. So he began developing experiments with both patients and healthy volunteers to see whether he could read minds.
When he started, there was no standardized three-dimensional map of the healthy brain, much less one of a brain that had suffered catastrophic injury. As recently as 2002, he published a paper explaining to colleagues that figuring out which parts of the brain are being activated is “both conceptually and technically more difficult than has been generally assumed.” http://adrianowen.org/site/Publications_files/Brett-2002-NatRevNeurosci.pdf
In 2006, he had his sensational breakthrough paper. http://adrianowen.org/site/Publications_files/Owen_Science_Brevia_2006.pdf It presented the findings of a scan of a 23-year-old woman who had been in a traffic accident and was deemed vegetative. While she was in the scanner, he asked her to imagine first one and then a second complex task: playing a game of tennis and then visiting all the rooms in her house.
Each involved an aware mind that could process spoken commands, follow instructions, understand words and use memory. Dr. Owen reasoned that if she could follow the instructions, the parts of her brain she would use to imagine the tasks would light up. He knew from experiments with healthy volunteers which parts of the brain relate to which task, and that they are easy to tell apart.
And bingo. When he asked her to imagine playing tennis, the correct bits of her brain lit up. When he asked her to imagine visiting all the rooms in her house, a different bit lit up. In fact, her scans were indistinguishable from those of the healthy volunteers.
The paper caused a sensation and pushed Dr. Owen to examine more patients in the scanner, testing to see whether he could help them talk using only their imaginations. Eventually, he could ask a patient to visualize visiting the rooms in a house as a way of saying “yes,” and playing tennis to mean “no.” Different parts of their brains would light up reliably in response. Then he tested them on questions he already had answers to. Is your father’s name Henry? Do you have a sister?
The computers would capture the information and he and his team would crunch the numbers and, days later, figure out what the patient had wanted to say. A big step forward came when the technology improved enough that Dr. Owen could read the scans in real time on the computer in front of him as the patient was in the magnetic scanner.
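The decision at the heart of the protocol is a comparison between two brain regions. The sketch below is a loose illustration of that logic only, with made-up region names, signals and threshold; the real analysis involves statistical maps of fMRI data, not two numbers. It uses the article's convention: house imagery means "yes," tennis imagery means "no."

```python
# Illustrative sketch of the yes/no decision implied by the protocol:
# compare activity in the two regions the imagery tasks engage.
# Region labels, signals and threshold are assumptions for demonstration.
import numpy as np

def decode_answer(motor_signal, spatial_signal, threshold=0.5):
    """Return 'yes', 'no', or 'uncertain' from two region time courses.

    Imagining tennis drives motor-planning areas; imagining walking
    through a house drives spatial-navigation areas.
    Article's convention: house imagery = yes, tennis imagery = no.
    """
    diff = np.mean(spatial_signal) - np.mean(motor_signal)
    if diff > threshold:
        return "yes"       # spatial-navigation region dominated
    if diff < -threshold:
        return "no"        # motor-imagery region dominated
    return "uncertain"     # neither clearly more active

# Toy run: strong motor-area signal, weak spatial signal,
# as when Mr. Routley imagined tennis to answer "no."
motor = np.array([2.1, 2.4, 2.2, 2.5])
spatial = np.array([0.3, 0.2, 0.4, 0.1])
print(decode_answer(motor, spatial))
```

The "uncertain" branch matters: as the article notes later, a scan that shows nothing cannot prove the absence of awareness, so a non-answer must never be read as a "no."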
All of that led to the moment the BBC camera captured when, for the first time, Dr. Owen asked a medical question neither he nor anyone except for the patient could answer: He asked Mr. Routley: “Are you in pain?”
Mr. Routley imagined playing tennis. By prearrangement, that meant “no.”
And Dr. Owen could see it on the screen in front of him.
While the finding has drawn international applause from clinicians, members of a British physicians’ group developing guidelines for managing people with severe brain injuries have cautioned that it may be too early to use the scanning technique routinely, as encouraging as the findings are.
In an editorial in the British Medical Journal, they said the technique should be used at this point only on patients in a registered national research program.
“Currently, fMRI techniques are not sufficiently developed to form part of the standard assessment battery…” they wrote. http://www.bmj.com/content/345/bmj.e8045?ijkey=f71702f0da232a1e8542db6fad22155068a4bcde&keytype2=tf_ipsecsha&linkType=FULL&journalCode=bmj&resid=345/nov28_1/e8045
Still, Dr. Owen’s techniques of finding voices once lost forever are so revolutionary that they loom large in a handful of legal cases around the world.
One landmark case at the Supreme Court of Canada involves a patient of his, Hassan Rasouli, left brain-injured after an infection. http://www.theglobeandmail.com/news/national/doctors-await-supreme-court-roadmap-for-right-to-live-cases/article6144038/
Mr. Rasouli’s doctors argue that life support ought to be withdrawn and that further treatment could harm him. His family is opposed, saying the patient is aware of his surroundings but can’t say so. Dr. Owen says he currently cannot discuss the case.
He is quick to emphasize that while a scan might find the lost, it can never do the reverse and show that there is no consciousness. Some patients may simply not be able to communicate this way; it may take new, still-unimagined scientific breakthroughs to hook into their brains.
Even if that happens, the day when humans can live by thought alone does not appear imminent.
However, Dr. Owen says so many people are now working on this technology that in another decade, patients may be able to communicate by thinking the actual word “yes,” and maybe even by thinking a sentence, such as “I’m hungry.”
With great luck, over time, they may be able to do that routinely and inexpensively – as long as they have lots of able-bodied people around to bring the equipment to their door.