Artificial Intelligence in Medicine

A Legal Study from the Perspective of Liability Law

Artificially intelligent systems (AI systems) are increasingly finding their way into medicine and are, among other things, a central component of personalized medicine. These systems have moved beyond the research stage: more and more applications are about to enter the market or have already reached medical practice. In the future, AI systems are expected to be used in all phases of patient treatment: from prevention to diagnosis and therapy selection to specific therapeutic measures, follow-up care and rehabilitation.

The application of a novel technology such as AI in an area as sensitive as medicine raises numerous legal issues. The technical characteristics of AI systems give rise to novel risks, which may prove problematic when liability claims are enforced. Owing to their capacity for digital autonomy, AI systems are able to perform certain medical tasks independently, without comprehensive human guidance. This poses a challenge for legal doctrine insofar as our (liability) law, in its basic conception, attributes autonomy only to humans, not to machines.

Against this background, numerous legal questions arise, which Dr. Djamila Batache (Attorney-at-Law) examined thoroughly in her dissertation project, supervised by Prof. Dr. Ulrich G. Schroeter: In which areas of the medical treatment relationship is the use of digitally autonomous AI systems by medical professionals permissible at all? What duties of care must medical professionals observe when using such systems? To what extent are they responsible for digitally autonomous behavior, and to what extent can such behavior be attributed to them? The study answers these questions under current law and examines whether the existing law is insufficient, inappropriate or incomplete, and thus needs to be amended.