Technical Change

The first pillar, "Technical Change," includes the following topics:

The research projects in the emerging field of "Law & Robots" address various problems resulting from the use of intelligent agents in all areas of life.

Legal scholars initially responded rather slowly to the challenges of digitalization and the new generation of "intelligent agents". However, the significant legal risks attached to the production and use of robots, and the possibility that unresolved problems will hamper further advancement, are triggering more and more legal research in the area.

With the emergence of a vast variety of robots - ranging from software agents on the internet to robot cars - the public has become aware of further problems: intelligent agents automatically generate and store data, which enables them to learn by identifying and deciphering patterns, but also means they continuously screen the activity of users and bystanders. It is still unclear who ultimately has the authority to decide how these data are used. Moreover, the embedding of smart technology in all areas of life carries the risk of unauthorized persons manipulating our environment to their advantage - in an internet of things, hacking poses a far greater problem than in a traditional environment. Further legal problems are emerging as smart devices interact ever more closely with humans.

Researchers at Basel's law faculty have analyzed such problems under the umbrella of NRP 75 "Big Data". For further information see: Big Data Dialog and NRP 75 Big Data.

Overall, the robot's ability to gather huge amounts of data, to process them and search for patterns to react to, and to learn subsequently by analyzing the response together with other information opens doors to innovation. But it also raises basic questions with regard to liability, privacy and safety, and it fundamentally requires a legal positioning of robots, which are increasingly perceived in relation to humans even though their mode of operation is purely mechanical. Even in legal debate, some scholars advocate a place for robots somewhere between man and machine. Such reasoning appears preposterous at first sight. It indicates, however, the depth of the problems, even when looking only at the question of liability. The complex structure needed to make a robot function has triggered a lively debate about legal responsibility should a smart device - such as a robot car - cause damage, for example by running into a group of children. Who is to be held liable? The machine (as an entity with assets), the man behind the machine (the producer or user), or nobody (because we all have to carry the risk of innovation)? The lack of a clear answer as to who is accountable when a robot causes harm hampers innovation. This became obvious when the car industry presented semi-automated cars but held back fully autonomous ones.

As important as problems of liability are, they are only one of the legal problems arising with the rise of the robots. Others include: the robot's inevitable capacity to automatically accumulate data on people's (and other robots') actions, and what this means for our understanding of privacy; the vulnerability of robots (and of the humans associated with them) to hackers; and the social changes likely to come in the near future when humans interact with efficient machines that lack any need for endearment or self-determination and act without an ethical code. All these issues give rise to new questions, which are partly connected to research already conducted at the Faculty of Law and partly touch on new topics.

Topics for Legal Research

Scholars at the law faculty research future legal challenges that arise when robots interact with humans. Part of the research program is an SNF-funded project exploring "Human-Robot Interaction: A Digital Shift in Law and its Narratives? Legal Blame, Criminal Law, and Procedure". Robots are on the rise: increasingly, we cooperate with standalone machines that help us with our chores, such as the automated vacuum cleaner or lawn mower, as well as with programs that are integrated into everyday objects, like driver-assistance systems in modern cars. These robots share responsibility for a task and, at the same time, gather and process information in order to carry out actions autonomously, based on large amounts of data and machine-learning techniques, including Artificial Intelligence (AI). Examples are:

"Trust in human-robot interaction - On the validity of the trust principle in negligence liability in Switzerland".(PhD Project) Smart safety devices in modern cars as well as intelligent tools in surgery provide poignant examples of human-robot collaboration that foreshadow a number of specific effects upon penal law. Within the domain of substantive criminal law, the demarcation of a negligent act from an intentional crime could change entirely if the mens rea could be inferred from a person's response to (ro)bot advice. For instance, if a drowsiness detection system alerts a driver to take a break but the driver continues, eventually causing an accident, courts may be inclined to infer negligence or even intent as a result of the driver disregarding the advice. Nonetheless, criminal procedure might not grant an adequate defense to human drivers.

"Defense Rights against Robot Testimony"(PhD project analyses) In criminal proceedings, human testimony - most notably a defendant's submissions -, will have less significance, while machine-based evidence will gain traction. This includes evaluative data from machine systems that collect information via sensors and can make their own evaluation of a situation based on this collected information. The operations of such systems are very complex. This leads to various problems that need to be clarified if evaluative data is to be used as evidence in a criminal trial, especially since the defense must be allowed to thoroughly examine such "robot evidence". For example, if a drowsiness detection system assesses a driver as sleepy, how can the defense challenge this machine-evidence if it is presented against the driver in court? Finally, verdicts will have to explain how machine-generated data was evaluated against human statements to justify an acquittal or conviction.

For further information see: Human Robot Interaction

Life sciences law is dedicated to the legal framework for the use of living organisms in technology, i.e. the application of techniques and technologies to living organisms - humans, animals and plants. The research area covers all related normative, ethical and technical issues.

The legal issues of life sciences law have numerous interdisciplinary points of contact, among others with biochemistry, bioinformatics, medicine, (molecular) biology and pharmacy.

The use of biotechnologies and information technologies in everyday life harbors not only numerous opportunities but also challenges. From a legal perspective, for example, the question arises as to who can dispose of genetic information or extracorporeally stored body parts, and on what legal grounds. Research in this area also attempts to trace the dissolving or shifting boundaries between the "technical" and the "living" and to address them from a jurisprudential perspective. A concrete research example is the handling of autonomous cardiac pacemakers, whose continued operation at the end of a patient's life can suddenly become agonizing.

Who owns data? In order to investigate this question from a legal and, above all, a factual perspective, the research area "Data Governance" deals with questions of allocation that concern rights to data in general and in the field of life sciences in particular. This includes questions of data access, possibilities of data portability and challenges arising from the exercise of data power. Practical application questions arise in particular in the area of mobile and personalized medicine, which is becoming increasingly important.

Both in the field of drug research and in health apps and other applications, algorithmic systems are increasingly being used whose largely autonomous mode of operation is described by the buzzword "artificial intelligence" (AI). From a jurisprudential perspective, this raises questions about the possibilities and the necessity of a normative framework for this technology. This applies in particular to the field of life sciences, where the regulation of medical apps - as distinct from mere lifestyle products - and the use of AI in diagnostics are among the issues at stake.

The use of distributed ledger technologies (DLT), in particular blockchains, is also an issue in the field of life sciences: in the future, it should be possible to exchange patient data between different players in the healthcare system via such decentrally organized databases. One possible advantage of the technology used in this way is that it gives patients better control over their data. However, such systems pose numerous legal challenges, because not only is the data stored in a decentralized and unalterable manner, but the consensus on the system logic is also created in a decentralized manner. Among other things, data protection law and liability law are affected.
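As a rough illustration of why such decentralized storage is considered "unalterable" - and why this creates tension with, for example, data protection rights to rectification or erasure - the following minimal Python sketch hash-chains a few records so that changing an earlier entry invalidates every later hash. It is a simplification under stated assumptions, not a model of any actual DLT product, which would additionally involve distributed consensus among many nodes.

```python
# Minimal, illustrative hash chain: each entry commits to the hash of its
# predecessor, so altering one record makes the chain fail verification.

import hashlib
import json


def entry_hash(prev_hash: str, payload: dict) -> str:
    data = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()


def build_chain(records: list[dict]) -> list[dict]:
    chain, prev = [], "0" * 64
    for payload in records:
        h = entry_hash(prev, payload)
        chain.append({"prev": prev, "payload": payload, "hash": h})
        prev = h
    return chain


def verify(chain: list[dict]) -> bool:
    prev = "0" * 64
    for entry in chain:
        if entry["prev"] != prev or entry_hash(prev, entry["payload"]) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True


if __name__ == "__main__":
    chain = build_chain([{"patient": "A", "note": "lab result"},
                         {"patient": "A", "note": "prescription"}])
    print(verify(chain))                       # True
    chain[0]["payload"]["note"] = "altered"    # retroactive modification
    print(verify(chain))                       # False - tampering is detectable
```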

The rapid pace of technological change is calling traditional legal principles in the area of civil law into question. This concerns questions about the beginning of human life and personality as well as questions about the end of life, such as the time of death. The research area also aims at a comprehensive examination of questions in reproductive medicine, especially in the areas of parentage law, pre-implantation diagnostics and family law in general. This results, for example, in a tension between the status of the institutions of family and parenthood on the one hand and new realities of life - shaped by the use of state-of-the-art technologies - on the other.

In this public law-oriented research area, the necessities and possibilities of a normative framework for the field of medicine and biotechnologies are explored. New developments and inventions in technology-based cutting-edge medicine may require an adaptation of the law to the "digital age" so that state-of-the-art diagnostic and treatment procedures can be applied in a legally secure manner in the future. Consideration of the international dimension is essential in this area, especially since numerous procedures in transplantation medicine or genetics are only made possible by transnational exchange between experts in the context of studies or research in general. The aim is therefore to create legal certainty for researchers, universities and the pharmaceutical and medical industry without restricting disruptive developments through overregulation.