‘Driver aids likely to make driving more dangerous, not safer’
Advanced driver assistance systems (ADAS) are likely to make driving more dangerous rather than safer. Drivers are increasingly handing over ‘control of the wheel’ to their car, but they have no idea what the ‘autopilot’ actually does.
This is the remarkable conclusion of the Dutch Safety Board, which has published the report ‘Who steers?’ on road safety and automation in traffic. The findings also raise questions about what the future of the autonomous car will and should look like, and what the legal regulations will be.
Pilot in an aircraft
New cars are already able to take over many tasks such as steering, braking, and accelerating. No fully autonomous cars are allowed on public roads yet, but they are coming. As a result of this automation, relatively simple tasks are taken over by the vehicle, and the role of the driver changes. But the systems are far from flawless.
The driver must keep an eye on the process and intervene if necessary. You could compare it to the role of the pilot in an aircraft. This new role requires extra concentration, whereas automation makes the driver less alert, according to the board.
This makes driving easier and more difficult at the same time. And it is not always clear who is steering: the person or the car. This can be fatal when approaching a traffic jam. The driver thinks he can rely on these systems, but is on his own if something goes wrong.
Approximately half of all new cars are equipped with driver assistance systems such as automatic emergency braking and adaptive cruise control. These systems act on observations and decisions made by the car itself; as a result, cars have become moving computers. The board argues that when cars with this technology are put on public roads, the technology is often not yet fully mature.
Moreover, in automation, people are often the weak link. They rely too readily on new technologies without understanding them, or they do not follow the instructions in the manual. This is understandable, because it often seems as if driver assistance systems manage pretty well on their own, according to the board.
A textbook case, cited in the Dutch Safety Board report, occurred in the Netherlands in 2016, when a Tesla Model S on Autopilot approached a traffic circle at 80 km/h. While the driver was (too) confident that the car would handle this, it tore through the center of the circle without braking and hit a pole. The driver was seriously injured.
The report also states that when motorists pick up their new car, they receive insufficient training in the operation and limitations of driver assistance systems. The same applies to software updates. As a result, a driver delegates tasks to the on-board computer without knowing exactly what it is doing.
According to the board, the car industry, governments, and experts are enthusiastically focusing on the promise of the autonomous car. Yet such a car will not be able to drive safely on public roads until the distant future.
Governments are developing new legislation for the future autonomous car. But for the current intermediate form, driver assistance systems with limited functionality, regulation is very limited.
When new cars are admitted to the roads, the government also has insufficient oversight of how new systems operate under various circumstances. The police often cannot read the data after an accident, and they do not always know which cars are equipped with these systems. Learning from accidents, incidents, and user experiences could improve road safety.
But who is liable in the event of an accident, both with current driver assistance systems and with autonomous cars? As long as the on-board computer merely assists the driver, the person at the wheel remains responsible and liable, perhaps even criminally. He has to intervene manually if the computer fails.
“But take it one step further,” says André Janssen, a professor of private law who studies liability in new technologies. “As soon as a vehicle becomes completely autonomous, the driver becomes a passenger.”
“When I get into a taxi, the driver is responsible, not his passenger. This can also apply to the autonomous car. In that case, it is not the owner of the Tesla who is liable for damage after a collision, but Tesla boss Elon Musk. But you can also say that Musk only supplied the hardware, and the software developer is therefore liable.”
As far as Janssen is concerned, the key questions for the future lie in liability law and criminal law. In the first case, it concerns the liability of the ‘passenger’, as Janssen outlined, but also the question of who should be insured.
“If I as an owner of a car no longer have any control over my vehicle, does the manufacturer have a certain product liability for which he has to take out insurance?” According to Janssen, the answer to this question has major consequences for the organization of liability insurance. In any case, according to him, it is clear that the premiums will be reduced if the number of accidents decreases as a result of autonomous cars.
Criminal law is complicated too. There, someone usually has to be ‘guilty’ because he or she has ‘done’ something. “But the driver of an autonomous car will argue that although he had an accident, he is not guilty because he did not act. The car did that itself.”
That is one line of reasoning, says Janssen, but one could also argue that the owner created danger by getting into an autonomous car. “If he had gotten on his bike or walked to the shop, he wouldn’t have created the threat.”
Under criminal law, it is also possible that Tesla, as the manufacturer, or its software developer will be prosecuted. But all these laws and regulations have yet to be developed, says Janssen. “A lot is happening in this area, but very little is concrete.”
Janssen says that, in the meantime, there is only one conclusion: anyone with a steering wheel in their hands is personally responsible, regardless of whether the car took part in the driving, as a Dutch court ruled in the case of a Tesla owner.