Human-Robot Interaction: A Digital Shift in Law and its Narratives? Legal Blame, Criminal Law, and Procedure

Human-Robot Interaction: Legal Blame, Criminal Law, and Procedure Comparative Law Project

Sabine Gless & Helena Whalen-Bridge

Robots are on the rise. Increasingly, we cooperate with standalone machines that help us with our chores, such as automated vacuum cleaners or lawn mowers, as well as with programs integrated into everyday objects, like driver-assistance systems in modern cars. These robots share responsibility for a task and, at the same time, gather and process information in order to act autonomously, drawing on large amounts of data and machine-learning techniques, including Artificial Intelligence (AI).

Trust in human-robot interaction – On the validity of the trust principle in negligence liability in Switzerland (PhD Project)

Janneke de Snaijer

Smart safety devices in modern cars, as well as intelligent surgical tools, provide striking examples of human-robot collaboration that foreshadow a number of specific effects on criminal law. Within substantive criminal law, the demarcation between a negligent act and an intentional crime could shift entirely if mens rea could be inferred from a person's response to (ro)bot advice. For instance, if a drowsiness detection system alerts a driver to take a break but the driver continues, eventually causing an accident, courts may be inclined to infer negligence, or even intent, from the driver's disregard of the advice. Criminal procedure, however, might not grant human drivers an adequate defense.

Defense Rights against Robot Testimony (PhD Project)

Jannik Di Gallo

In criminal proceedings, human testimony – most notably a defendant’s submissions – will carry less weight, while machine-based evidence will gain traction. This includes evaluative data from machine systems that collect information via sensors and form their own assessment of a situation based on that information. Because the operations of such systems are highly complex, various problems must be clarified before evaluative data can be used as evidence in a criminal trial, especially since the defense must be allowed to thoroughly examine such “robot evidence”. For example, if a drowsiness detection system assesses a driver as sleepy, how can the defense challenge this machine evidence when it is presented against the driver in court? Finally, verdicts will have to explain how machine-generated data was weighed against human statements to justify an acquittal or conviction.

News

UNESCO Webinar Series on AI and the Rule of Law

Webinar 1: The Next Frontier: IP in the Era of Generative AI  

  • Thursday 25 May 2023 via Zoom from 15:00 to 16:30 CEST 

Webinar 2: The Admissibility Challenge: AI-Generated Evidence in the Courtroom 

  • Thursday 15 June 2023 via Zoom from 15:00 to 16:30 CEST

Webinar 3: Virtual Reality and Augmented Reality in the Courtroom  

  • Thursday 6 July 2023 via Zoom from 15:00 to 16:30 CEST


Artificial Justice: The Quandary of AI in the Courtroom

by Maura R. Grossman, Sabine Gless, Mireille Hildebrandt and Paul W. Grimm (2021-2022)

Panel discussion: Link


Council of Europe Webinars

https://www.coe.int/en/web/artificial-intelligence/webinars

AI&Law Webinar Series #09 - Facial Recognition v. Criminal Justice: https://youtu.be/vaTdtAYn5lM


Events

No entries available.

Universität Basel | Juristische Fakultät
(Ro)bot-Human Interaction
Peter Merian-Weg 8
P.O. Box | 4002 Basel | Switzerland
Tel +41 61 207 28 73