Human-Robot Interactions: Legal Blame, Criminal Law and Procedure Comparative Law Project (Sabine Gless & Helena Whalen-Bridge)
Prof. Dr. iur. Sabine Gless
Professor of Criminal Law and Criminal Procedure (Professur Gless)
Helena Whalen-Bridge
Associate Professor (Educator)
Academic Fellow
Because humans now share tasks with robots, legal systems face the challenge, when harm is caused in these interactions, of allocating legal culpability and establishing guilt beyond traditional human-centric approaches. Driving automation illustrates this challenge. Recent investigations into accidents involving driver-assistance systems and human drivers point to a new understanding of joint responsibility. The 2018 fatality in Tempe, Arizona, highlighted possible pitfalls: no charges were brought against Uber, but the human back-up driver had to stand trial, even though the accident report suggests that the automated system also did not perform properly. How does, or should, the legal system determine when a robot can be blamed for a particular decision or action, and how should it allocate responsibility between a robot and a human?
Substantive Law: Changes in the Elements of Guilt When Robots Guide Humans
Allocating legal culpability and establishing guilt in human-robot interaction is put to the test in criminal law. Because criminal law was designed by humans to create order among themselves, its fundamental concepts, such as the notion of free will or the concept of a willful action, are uniquely human. The rise of ambient intelligent environments arguably requires a change in the traditional allocation of blame. Robots or their suppliers may have partially caused the harm, but they do not fit traditional criminal concepts, which ultimately may not be suitable for determining liability and apportioning blame. Understanding the imputation of guilt in criminal law will help clarify the apportionment of guilt in all forms of legal responsibility and offer an important point of comparison with other areas of law. For example, requiring a robot or its supplier to pay compensation for a tort claim offers a potential solution, and strict liability is a viable option, but the issues in criminal law are not so easily resolved.
Procedural Law: Challenges for Fact-Finding Using Machine Evidence and Unresolved Privacy Issues
It is not yet clear whether and how the data generated during human-robot interaction can be used as evidence in legal fact-finding. The evidentiary issues arising from information generated by the robot side of a human-robot collaboration are manifold. For instance, if a human driver’s face is monitored for drowsiness during automated driving, the activation of a drowsiness alert, or the driving assistant’s assessment of the driver’s conduct, could all be potentially relevant evidence. However, a drowsiness detection system can be imprecise or ambiguous: it may rely on biased algorithms or be trained on incorrectly labeled data.
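The ambiguity is easy to see in a minimal sketch. The following Python fragment is entirely hypothetical: the PERCLOS-based scoring formula, the threshold value and all names are illustrative assumptions rather than any actual vendor’s implementation. It shows how an alert system typically collapses a continuous, uncertain score into a binary flag, so that two nearly indistinguishable driver states can leave opposite traces in the record that later serves as evidence.

```python
# Hypothetical sketch: how a drowsiness alert reduces a continuous,
# uncertain model score to a binary log entry. All names, formulas and
# thresholds here are illustrative assumptions, not any vendor's system.

THRESHOLD = 0.7  # vendor-chosen cutoff; choosing 0.65 would change the "evidence"

def drowsiness_score(perclos: float, blink_rate: float) -> float:
    """Toy estimate combining PERCLOS (fraction of time the eyes are
    closed) and blink rate into a score between 0 and 1. A real system
    would use a trained model whose potential biases and mislabeled
    training data are not visible in the logged output."""
    return min(1.0, 0.8 * perclos + 0.2 * min(blink_rate / 30.0, 1.0))

def log_alert(timestamp: str, perclos: float, blink_rate: float) -> dict:
    """Emit the kind of record an event data recorder might keep."""
    score = drowsiness_score(perclos, blink_rate)
    # Typically only the binary flag is stored; the marginal score that
    # a fact-finder might care about (0.69 vs. 0.99) is discarded.
    return {"time": timestamp, "drowsy_alert": score >= THRESHOLD}

# Two nearly identical driver states fall on opposite sides of the cutoff:
for perclos in (0.65, 0.70):
    print(log_alert("t0", perclos=perclos, blink_rate=25.0))
# {'time': 't0', 'drowsy_alert': False}   (score ~0.687)
# {'time': 't0', 'drowsy_alert': True}    (score ~0.727)
```

The point for fact-finding is that the logged alert embeds a contestable design choice, the threshold, that neither party can scrutinize unless the underlying score and the training data are disclosed.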
By comparing adversarial and inquisitorial systems, the project explores how disparate legal traditions approach these issues differently, bringing the procedural questions into clear focus. Adversarial proceedings have the advantage of partisan vetting, which gives both parties the opportunity to challenge consumer products that become a sort of witness against the human. Inquisitorial systems, however, may have stronger mechanisms for introducing expert evidence recorded outside the courtroom.
Narrative: Changing Legal Narratives When Harm Is Caused in a Human-Robot Interaction
Human participation in, and causation of, events animate the legal narratives that underlie legal responsibility, and the human-centric origin of criminal justice systems shapes the narratives of criminal verdicts. With the increase in human-robot interactions, however, this human-centric approach is at risk of losing its place as the primary guiding principle; a risk is emerging that criminal investigations may even be triggered by machine error. Until now, harm caused during such collaborations has not received particular attention in criminal law, despite the fact that robots are, in various ways, already gathering and processing information, drawing conclusions on behalf of humans and cooperating with them, or even acting autonomously as their agents. This project explores how these developments affect legal discourse generally, the narratives of criminal verdicts and the staging of a criminal case.