
When is a full self-driving car not fully self-driving? When it’s a Tesla. That’s no joke; lives are on the line.

On a recent call with investors, Tesla chief executive Elon Musk said that, while it isn’t guaranteed, some customers may get access to the “full self-driving” feature in their cars later this year.

“Full self-driving” (FSD) is currently a $7,900 optional extra on new Tesla vehicles. As for what FSD actually does, well, that’s where things get absurd – and potentially dangerous.

Experts predict that fully automated (Level 5) cars, the sort that can drive themselves anywhere in any weather, are still decades away. So right away, any claim of “full self-driving” cars in 2019 should sound too good to be true.

Still, a reasonable person might assume that paying $7,900 for a product called full self-driving would get you a car that, you know, fully drives itself. However, that’s not the case with Tesla.

In Tesla’s world, a full self-driving car is one in which the (human) driver is still responsible for supervising the car and intervening if/when the car makes a mistake. The company would consider such a system “feature-complete,” meaning it has delivered on the full self-driving product it sold to customers.

“Yeah, feature-complete, I mean, the car is able to drive from one's house to work, most likely without interventions. So it will still be supervised, but it will be able to drive,” Musk said on the earnings call.

Given how frequently Tesla’s new Smart Summon feature has made mistakes, that promise doesn’t inspire much confidence. (Look up the many videos on YouTube of Tesla’s Smart Summon system failing to navigate parking lots.)

The crucial bit in Musk’s statement is “supervision.” Having to supervise the driving means you can’t check your phone, you certainly can’t sleep and you really can’t be doing anything other than keeping your eyes on the road.

What Musk described in the earnings call is different from how Tesla advertises FSD. “All you will need to do is get in and tell your car where to go,” the company claims on its website. However, just below that is the caveat that, “The future use of these features without supervision is dependent on achieving reliability far in excess of human drivers […] as well as regulatory approval, which may take longer in some jurisdictions.”

It’s not clear who Tesla imagines would be liable in the full self-driving mode Musk described, but the requirement for driver supervision implies the human driver is ultimately responsible – as is the case with Tesla’s current Autopilot system, and every other car on the road.

What makes this whole thing smell off is that Tesla has a big incentive to switch on the FSD system, as the LA Times noted. The company has taken in around US$600-million from customers who paid for the optional feature. When Tesla ships the FSD system, the company can recognize that revenue.

Misleading advertising is one thing. Giving a product a misleading name is another. But these things become a safety concern when driving is involved.

The danger is that the rest of us are going to have to share the road with these FSD cars that aren’t fully self-driving. Will customers fully grasp the system’s limitations? If they do, will they be able to reliably supervise it for hours on end?

“Experimental studies have shown that drivers can lose sight of what automated systems are doing, fail to notice when something goes wrong, and have trouble taking control again,” the U.S.-based Insurance Institute for Highway Safety says, citing several studies.

Despite our best intentions, humans have proven to be pretty bad at supervising automated cars, as was the case with the Uber test vehicle that struck and killed a pedestrian last year, and with the Tesla drivers whose lapses in supervision of the Autopilot system led to fatal crashes.

Of course, humans have also proven to be pretty bad at driving. Clearly, making cars safer is an urgent goal. Driver assists such as automatic emergency braking, which act as a last-resort backup, have proven to be beneficial. But trying to completely split the task of driving between humans (who supervise) and computers (which physically control the steering and braking) is probably not the path to increased safety. It’s like a doubles tennis match in which neither partner can see what the other is doing, and communication is limited to beeps and warning lights.

Until companies take full liability for the actions of their robo-cars, and can prove that they are safe, we should leave full self-driving to sci-fi movies and keep our own two human hands on the steering wheel.
