The electric BMW i3 glides silently forward when summoned by a smartphone. It stops precisely in front of me, as a taxi might at curbside, and displays a welcome message on a rear-mounted touchscreen. One more touch, and we’re off on a fully autonomous drive.
This is BMW’s vision of a driverless future and, while there are practical considerations in terms of sensor sensitivity, software programming and GPS mapping, it’s probably inevitable.
“Autonomy is just an engineering problem,” says Simon Euringer, vice-president of BMW’s Group Technology Office in California. “BMW is not racing to be the first, but we will eventually get there. We need to be talking more about how we will interact with autonomous cars, and how they will change our society.”
Euringer speaks of real estate freed up by the elimination of the need for parking lots, and of BMW’s first steps to handle seemingly simple tasks such as asking your autonomous car to make a stopover while en route. It’s all pretty optimistic stuff, a future where human mobility is safe, efficient and accessible.
There is, however, one rather large fly in the ointment. When the i3 pulled up in front of me in the basement of Vancouver’s Convention Centre during this month’s TED Conference and flashed its greeting, it had my name spelled incorrectly. A simple error, but one perhaps symbolic of the greater issues that face autonomy in the near future.
“Human beings have a low tolerance for failures of technology,” Euringer continues. “We have learned to accept human failings, but we don’t accept when technology doesn’t work.”
Thus, while there are hundreds of collisions on our roads every day, many of them fatal, the ones receiving the most attention at present are the death of a pedestrian struck in Arizona by one of Uber’s fleet of prototype autonomous Volvos in March, and the fatal highway crash of a Tesla Model X near Mountain View, Calif., the same month. In both cases, technology failed and the humans at the wheel didn’t respond in time to prevent disaster.
Blame will be decided by the courts, but both cases should be a warning to everyone who uses the roads.
First, the Uber crash shows that relying on corporations to make public safety a top priority in the race to autonomy is a mistake. The safety driver at the wheel of the Volvo appears to have been distracted at the time of the collision, but she would also have been required to sit at the wheel for a monotonous eight- to 10-hour shift, and other safety drivers have spoken of a company culture that encouraged them to skip breaks and rack up as much mileage as possible.
If that wasn’t enough of a warning, consider the Takata airbag disaster, Volkswagen’s diesel emissions cheating or GM’s ignition failures. Companies won’t put safety first if they think they can cut corners and profit.
The Tesla collision is perhaps even more worrying than the Uber failure. Tesla places the blame squarely on the driver of the Model X, saying in a release, “Tesla is extremely clear that Autopilot requires the driver to be alert and have hands on the wheel.”
So why is Tesla calling it “Autopilot”? The company’s website even features a picture of a driver with their hands in their lap.
But the two biggest issues are drivers’ overtrust of vehicle safety systems and increasing driver disengagement.
On driver disengagement, BMW displayed a graphic showing a gradual transition of driving responsibility from human to machine. However, Euringer was a little more blunt. “No one likes a take-over situation. In any hand-over scenario, there is the risk of failure.”
That risk of failure is the great looming danger of increasing driving assists. The less input a driver has to put into driving, the more likely they are to become bored, distracted or drowsy. When a system fails, or disengages suddenly, reaction times will vary from person to person. Further, in both fatalities mentioned above, it appears as though the systems gave no warning prior to impact, meaning that the driver would have had to be paying full attention with their eyes on the road.
We know humans are terrible drivers overall. One need only pull up collision records from past years to see a litany of offences: driving under the influence of drugs or alcohol, failure to use a seatbelt, distracted driving. Add in cars that can mostly stay in their lanes and slow for traffic, and we’re more and more likely to check our text messages at speed when we think we can get away with it. The systems may not be perfect, but risky behaviour on the road is a human certainty.
Already, videos are popping up of drivers trying to fool the torque sensors of their Teslas into thinking there are hands on the wheel, including, bizarrely, wedging an orange into the steering wheel. Earlier this year, an alleged drunk driver was found passed out behind the wheel of his Tesla, stopped on the Bay Bridge in San Francisco. He reportedly attempted to assure officers that his car was in Autopilot mode – and was, of course, arrested anyway.
“There will come a day when vehicles are fully autonomous, but between here and there is a profit motive,” says Alex Roy, founder of the Human Driving Association, and an autonomy expert for The Drive website.
Roy points out that simple safety solutions such as better driving instruction, increased licensing standards and the mandating of basic collision-mitigation systems have all been ignored in the headlong rush to an autonomous future. Instead of the slow trickle-down of Series Automation, in which cars function without driver input under perfect conditions, he advocates Parallel Automation, in which technology enhances the responsibilities of the driver.
The idea is not much different from anti-lock brakes or all-wheel drive, which work to help keep us on the road. The paradox comes with the human tendency to overtrust these systems and start following too closely on the highway, or heading out into a snowstorm without proper winter tires.
Engineers such as Euringer see a well-ordered world where personal transportation makes perfect sense. Realists such as Roy ask who profits in such a future, and who will cut corners in the race to get there. Regulators and consumers should proceed with caution.