Faculty of Advocates warns on automated vehicle reasoning processes

The advanced technology used in automated vehicles could hinder accident investigations involving those vehicles, according to a warning from the Faculty of Advocates – the professional body for advocates in Scotland.

In its response to a joint consultation on self-driving vehicle laws by the Scottish Law Commission and the Law Commission of England and Wales, the Faculty said that while self-driving vehicles are being developed to ‘think’ for themselves, their reasoning processes are likely to be impenetrable.

As a result, it may be impossible to determine what led to the behaviour that caused an automated vehicle accident.

The Faculty said that as research moves towards ‘neural networks’ for automated driving systems, which make autonomous decisions of their own, the reasoning behind those decisions could be unclear.

“It is a feature of such systems that their internal ‘reasoning’ processes tend to be opaque and impenetrable (what is known as the ‘black box’ phenomenon) – the programmers may be unable to explain how they achieve their outcomes,” said the Faculty.

“If the operation of the system causes an accident, it might be perfectly possible to determine the cause through examination of the source code of a conventional system (there might be a clearly identifiable bug in the system, or one of the algorithms might be obviously flawed) but where a neural network is involved, it may be literally impossible to determine what produced the behaviour which caused the accident.”
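The contrast the Faculty draws can be made concrete with a minimal, hypothetical sketch in Python (the function names, the 2-second threshold and the random weights are illustrative assumptions, not drawn from the consultation): a rule-based controller’s decision logic can be read line by line, whereas an equivalent decision made by even a tiny neural network is determined entirely by numeric weight matrices that encode no human-readable rule.

```python
import numpy as np

# Conventional, rule-based controller: the logic is readable in the source.
def rule_based_brake(distance_m: float, speed_mps: float) -> bool:
    """Brake when time-to-collision drops below a 2-second threshold.

    A flaw here (say, a comparison written the wrong way round) would be
    plainly visible to an investigator reading the code after an accident.
    """
    time_to_collision = distance_m / max(speed_mps, 0.1)
    return time_to_collision < 2.0

# Neural-network controller: the same decision, taken by learned weights.
# These random weights stand in for values learned from driving data.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8,))

def network_brake(distance_m: float, speed_mps: float) -> bool:
    """Decide via matrix arithmetic over the weight matrices W1 and W2.

    Investigators can print every number in W1 and W2, but the values do
    not map onto any human-readable rule such as 'brake when TTC < 2 s'.
    """
    hidden = np.tanh(np.array([distance_m, speed_mps]) @ W1)
    return float(hidden @ W2) > 0.0

print(rule_based_brake(20.0, 15.0))  # True, and the source shows exactly why
print(network_brake(20.0, 15.0))     # the 'why' is buried in the weights
```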

Because of this, the Faculty said, automated vehicles should not yet be programmed to perform manoeuvres such as mounting the pavement to let emergency vehicles pass, or exceeding the speed limit in particular circumstances.