How many victims is Autopilot allowed to claim?
The family of a man who died in a Tesla Autopilot accident in March of this year is suing the company for wrongful death. Jeremy Banner’s Tesla Model 3 failed to notice a semi crossing the road and crashed into it at 110 km/h. Banner was killed instantly.
Tesla’s goal is to dominate the global car market by making the world’s first autonomous car. According to Tesla, Autopilot is an essential step toward that goal. But is the rush toward automation moving too fast, or are deaths an unavoidable price for autonomous cars to save lives in the longer term?
Preview of an autonomous future
Autopilot has fast reflexes and is never distracted, but sometimes it fails to see danger. That happened in four of the five known fatal accidents since Autopilot’s introduction in 2015. It gives a foretaste of the uncomfortable questions that await us in the robotic age. Tesla’s CEO, Elon Musk, says the technology saves lives, and plenty of Tesla drivers testify to dangers detected and collisions avoided.
Both sides may be right: computers may have killed a few people who would otherwise have lived, but they may also have saved the lives of many more. In the years to come, it will be up to society and the authorities to decide whether one is worth the other.
It is no longer an academic question. Musk’s decision to make Autopilot available to as many people as possible amounts to a vast experiment on all the highways of the world.
Tesla says the autonomous technology isn’t yet reliable enough for people to lose focus, not even for a second; drivers have to keep their hands on the wheel. Since most American states have not yet decided which rules should apply to driverless cars, there is also a legal reason behind this requirement.
For the government, Autopilot is simply an advanced tool for drivers. In other words: improved cruise control.
Autopilot can’t yet handle things that don’t occur on highways, such as traffic lights and traffic signs. But in the four years it has been in use, it has gradually taken on more complex tasks: merging without problems, avoiding cars that cut in, and navigating from one highway to another.
Computers may have flaws, but they don’t get drunk or tired. Nor do they get angry or check their Instagram feed. It is estimated that 94% of collisions are caused by human error, and could therefore in principle be avoided. From this point of view, the autonomous car could be as important to saving lives as penicillin or the smallpox vaccine.
Many companies, including General Motors, Daimler, and Uber, are working hard to develop technology quickly. The biggest competitor, according to many observers, is Waymo LLC. This Google spin-off has been working in the field for more than ten years. None of these companies is ready to sell a driverless car to the public.
Tesla is overtaking them all, Musk tells investors, thanks to the more than 500,000 Autopilot-equipped Tesla models already on the road. They allow Tesla engineers to collect terabytes of information from customers, which is used to improve the Autopilot software based on real-world experience.
Even Tesla models without Autopilot do their bit. They ensure that the human driver’s choices can be compared to what the computer would have done. But this type of beta testing could be the cause of lives lost.
Every few weeks, Tesla finishes a new and improved version of Autopilot and distributes it to its cars. According to Musk, it won’t be long before the software is good enough for drivers to do away with their steering wheel.
‘Safer than human drivers’
When an analyst from Morgan Stanley kept nagging about Autopilot’s safety figures, Musk changed the subject. He referred to the dangers of human driving and the ability of technology to solve them.
He compared cars to old-fashioned elevators operated by human attendants. “Sometimes they got tired or drunk or something like that, and when they pulled the lever at the wrong time, someone was cut in half,” he said. “That’s why there are no elevator attendants anymore.”
Because it is a matter of life and death, it is not surprising that Musk sometimes treats the defense of driverless cars as a holy crusade. He once said that it would be “morally reprehensible” not to allow Autopilot on the market.
But he and his followers aren’t the only ones who talk about it like that. The first American driver to die while using Autopilot was Joshua Brown, a Navy veteran from Ohio, who, like Banner, ran into a crossing trailer.
After the collision, in 2016, his family issued a statement in which they supported Tesla’s moral considerations as a matter of principle. “No change without risk,” they wrote. “Our family finds comfort and pride in the fact that our son made such a positive contribution to future safety on the highway.” Brown had become a martyr for Musk’s good cause.
While both Brown and Banner were using Autopilot when they crashed into a semi, there are differences. Brown drove a Tesla Model S. Although also called ‘Autopilot’, the technology in that car was supplied by Mobileye, an Israeli start-up since acquired by Intel. Banner’s Model 3 was equipped with a second-generation version of Autopilot that Tesla developed in-house.
Tesla was exonerated in Brown’s death because research by the National Highway Traffic Safety Administration (NHTSA) showed that the driver wasn’t paying attention to the road. He had at least seven seconds to react before he crashed into the truck.
Difference between computers and people
Computers can fail when the driver least expects it, because certain tasks that are trivial for a human being are very difficult for a computer. Telling the difference between a harmless roadside object and a dangerous obstacle is surprisingly hard for a machine.
Tesla has not introduced any restrictions that would make Autopilot safer but also less pleasant to use. The company allows motorists to set Autopilot’s cruising speed higher than local speed limits. They can use Autopilot wherever the car detects lane markings, even though the manual states that Autopilot may only be used on highways intended for fast traffic.
With almost 2.5 billion kilometers driven, it should be easy to calculate how reliable Autopilot is. According to Musk, driving with Autopilot is twice as safe as driving without it. But so far, he hasn’t released any data to confirm this.
Tesla publishes quarterly figures on accidents involving Autopilot. But without data on the circumstances in which they took place, these figures are of little use, experts say. An analysis by the insurance industry of claims data related to Tesla accidents did not yield clear conclusions.
After Brown’s collision in 2016, the U.S. highway safety authority investigated Autopilot and found no reason to order a recall. Its conclusion stated that Tesla cars with Autopilot collided 40% less often than those without. But that conclusion was based on a series of questionable calculations.
Tesla had released mileage and collision data for 44,000 cars, but the most important data were incomplete or contradictory for all but 5,700 of them. In that limited group, the collision rate with Autopilot was actually higher.
The flaws only came to light when Randy Whitfield, an independent statistical advisor in the American state of Maryland, pointed them out this year. The safety authority says it still supports its original conclusion.
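The statistical pitfall Whitfield pointed out can be illustrated with a toy calculation. The numbers below are invented for illustration and are not NHTSA’s or Tesla’s actual figures; the point is only that a crash rate computed over a sample with mostly missing mileage can flip once the analysis is restricted to cars with complete records.

```python
# Invented numbers, for illustration only: how excluding records with
# incomplete mileage data can reverse a headline crash-rate comparison.

def crashes_per_million_miles(crashes, miles):
    """Exposure-based crash rate: crashes per million miles driven."""
    return crashes / (miles / 1_000_000)

# Full sample: mileage is missing or contradictory for most cars,
# so the denominator is unreliable.
full_rate = crashes_per_million_miles(crashes=1000, miles=800_000_000)

# Subset with complete before/after mileage records only.
clean_rate = crashes_per_million_miles(crashes=200, miles=120_000_000)

print(f"full sample:  {full_rate:.2f} crashes per million miles")
print(f"clean subset: {clean_rate:.2f} crashes per million miles")
```

With these hypothetical inputs the clean subset shows a higher rate than the full sample, echoing how a headline conclusion can depend on which records are counted.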
Infallible or safer than humans?
The assessment of Autopilot, and of fully autonomous technology generally, is problematic. Among other things, it is not clear how much risk society will tolerate. Do robots have to be flawless before they are allowed on the road? Or is it enough for them to be better than the average human driver?
“People find injuries or deaths caused by machine failure practically unacceptable,” said Gill Pratt, who leads autonomy research at Toyota, in a 2017 speech. Machines will have to keep learning for many years; achieving the required reliability will take many more miles than have so far been recorded in simulations and field tests.
Waiting could cause more deaths
But the paradox is that such a high standard could lead to more deaths than a lower one. In a 2017 study for RAND, a policy research institute, researchers Nidhi Kalra and David Groves modeled 500 different conceivable scenarios for the development of the technology.
Most scenarios showed that it would cost tens of thousands of lives to wait for almost perfect driverless cars, instead of starting with cars that are only slightly safer than people.
“People who wait for this technology to be almost perfect should realize that there are costs involved,” says Kalra, who has testified before the U.S. Congress about policy for driverless cars.
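The trade-off behind this argument can be sketched with a toy model. Everything here is an invented assumption for illustration (the annual death figure is a rough U.S. number, the 10%-per-year learning rate and both deployment scenarios are made up); this is not the actual Kalra–Groves model, only the shape of its reasoning.

```python
# Toy sketch of the deploy-early vs. wait-for-perfection trade-off.
# All parameters are invented assumptions, not RAND's actual model.

HUMAN_DEATHS_PER_YEAR = 37_000  # rough U.S. annual road-death figure

def cumulative_deaths(deploy_year, av_risk_factor, horizon=30):
    """Total deaths over `horizon` years.

    Before deploy_year, the human-driver death rate applies. After
    deployment, autonomous cars start at av_risk_factor times the
    human risk and improve 10% per year as the fleet learns
    (an assumed learning rate).
    """
    total = 0.0
    risk = av_risk_factor
    for year in range(horizon):
        if year < deploy_year:
            total += HUMAN_DEATHS_PER_YEAR
        else:
            total += HUMAN_DEATHS_PER_YEAR * risk
            risk *= 0.9  # learning from real-world miles
    return total

# Scenario A: deploy now, only 10% safer than human drivers.
early = cumulative_deaths(deploy_year=0, av_risk_factor=0.9)

# Scenario B: wait 15 years for cars that are 90% safer at launch.
late = cumulative_deaths(deploy_year=15, av_risk_factor=0.1)

print(f"deploy early:          {early:,.0f} cumulative deaths")
print(f"wait for near-perfect: {late:,.0f} cumulative deaths")
```

Under these assumptions the early-deployment scenario accumulates far fewer deaths over 30 years, because the years spent waiting are all lived at the full human death rate.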
Her insight is based on how these cars learn. A traditional program is a series of instructions written by a human programmer. That is how most software works, but not the software used by Tesla and other developers of driverless cars.
Recognizing a bicycle and then anticipating where it will go is too complicated to write down as a series of instructions. Instead, programmers use machine learning to train their software. Over time, the machine comes up with its own rules for interpreting what it sees.
The more experience they have, the smarter these machines become. That is a problem if autonomous cars have to stay in the laboratory until they are perfect, says Kalra. “If we want to preserve as many lives as possible,” she says, “we might even have to send autonomous cars out on the road if they’re not as safe as people, to speed up their learning process.”
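The contrast between hand-written instructions and learned rules can be shown in miniature. The example below is a deliberately simplified sketch (the obstacle-width feature, the example data, and the one-weight perceptron are all invented for illustration; real driving software uses far richer models): a human encodes a threshold directly, while the learned version derives its own decision boundary from labeled examples.

```python
# Illustrative contrast: a hand-written rule vs. a rule the machine
# derives itself from labeled examples (toy data, invented feature).

# Hand-written rule: a programmer encodes the decision directly.
def is_obstacle_by_rule(width_m):
    return width_m > 0.5  # threshold chosen by a human

# Learned rule: a tiny perceptron finds its own threshold from
# (width, is_obstacle) training pairs.
def train_threshold(examples, epochs=200, lr=0.05):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for width, label in examples:
            pred = 1 if w * width + b > 0 else 0
            err = label - pred          # 0 when correct
            w += lr * err * width       # nudge weight toward the label
            b += lr * err
    return w, b

examples = [(0.1, 0), (0.3, 0), (0.4, 0), (0.7, 1), (0.9, 1), (1.2, 1)]
w, b = train_threshold(examples)

def is_obstacle_learned(width_m):
    return w * width_m + b > 0

print([is_obstacle_learned(x) for x in (0.2, 1.0)])
```

The more (and more varied) examples the learner sees, the better its boundary matches reality, which is why real-world miles matter so much to Kalra’s argument.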