AI and Law

While researching for a project, I came across an interesting 2017 report from the European Union examining the liability of robots. Among other things, it considers how liability may shift as robots become more autonomous:

Considers that, in principle, once the parties bearing the ultimate responsibility have been identified, their liability should be proportional to the actual level of instructions given to the robot and of its degree of autonomy, so that the greater a robot's learning capability or autonomy, and the longer a robot's training, the greater the responsibility of its trainer should be; notes, in particular, that skills resulting from “training” given to a robot should be not confused with skills depending strictly on its self-learning abilities when seeking to identify the person to whom the robot's harmful behaviour is actually attributable; notes that at least at the present stage the responsibility must lie with a human and not a robot;

When a defect in a car kills someone, the manufacturer is responsible; when a careless driver runs someone over, the responsibility falls on the driver.

A self-driving car that drives carelessly falls somewhere between these two cases. To me, a self-driving car that causes an accident sounds like a software defect, but what if, globally, the AI's rate of accidents per distance driven becomes lower than that of human drivers?

Currently, road traffic deaths are in the range of 1.35 million per year. To fight that figure, imagine a global law stipulating that any driver who causes a deadly accident is no longer allowed to drive. Now imagine that all cars suddenly become self-driving and that the AI halves the number of accidents. Killing “only” 675,000 people a year would be a massive improvement, but can we really say that it's ok? And if we applied the human rule literally, the AI would be banned, and with it every car on the road, after its very first fatal accident.
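For concreteness, here is the back-of-the-envelope arithmetic behind that figure, as a small sketch. The 1.35 million baseline is the annual road-death estimate cited above; the 50% reduction is purely the assumption of the thought experiment, not a measured property of any real system.

```python
# Back-of-the-envelope arithmetic for the thought experiment above.
# Assumptions: ~1.35 million road deaths per year (figure cited in the text)
# and a hypothetical 50% reduction once all cars are self-driving.

annual_road_deaths = 1_350_000   # current yearly toll
assumed_ai_reduction = 0.5       # hypothetical halving of accidents by the AI

deaths_with_ai = annual_road_deaths * (1 - assumed_ai_reduction)
lives_saved = annual_road_deaths - deaths_with_ai

print(f"Deaths per year with AI drivers: {deaths_with_ai:,.0f}")  # 675,000
print(f"Lives saved per year:            {lives_saved:,.0f}")     # 675,000
```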

It’s an open question, and I don’t have an answer. We need a solution (the report suggests an insurance scheme), but I think it’s unreasonable to design liability for AI around human concepts, because there are massive differences that even a trivial thought experiment can highlight.

Tags: AI