Due Diligence Requirements for Automated Anti-Money Laundering
Exploring Ways to Limit Criminal Liability When Using Artificial Intelligence
The chatbot ChatGPT pushed Artificial Intelligence (AI) into public consciousness. But AI does not just promise better writing and better code: for various corporate processes, AI systems promise to perform certain tasks better and more efficiently than humans currently do. This is particularly true in the area of anti-money laundering: compared to the rule-based systems used so far, AI systems promise significant improvements in detecting suspicious transactions and a considerable gain in efficiency. Yet AI systems not only offer advantages; they also entail new risks. On the one hand, they are usually a black box for their users: their automated decisions are comprehensible neither ex ante nor ex post. On the other hand, AI systems can make mistakes, and criminally relevant mistakes are particularly serious: if, for example, an AI system fails to detect a suspicious transaction, a bank or securities firm can be prosecuted.
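To make the contrast concrete, the following is a minimal, purely illustrative sketch (in Python, with hypothetical thresholds and toy data, not drawn from the project itself) of the difference between a rule-based check and an AI-based anomaly detector in transaction monitoring: the fixed rule is fully auditable but rigid, while the learned model flags transactions without any inherent explanation of why.

```python
# Illustrative contrast between rule-based and AI-based transaction
# monitoring. All thresholds, features, and data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy transactions: [amount, hour_of_day]
transactions = np.array([
    [120.0, 14], [80.5, 10], [15000.0, 3],   # large night-time transfer
    [200.0, 16], [95.0, 11], [9900.0, 23],   # just below a fixed threshold
])

# Rule-based system: a fixed, auditable threshold. Transparent, but easy
# to circumvent (e.g. by structuring payments just below the limit).
RULE_THRESHOLD = 10_000.0
rule_flags = transactions[:, 0] > RULE_THRESHOLD

# AI-based system: an anomaly detector scores each transaction against
# learned patterns. More flexible, but the flag itself does not explain
# *why* a transaction is suspicious (the black-box problem).
model = IsolationForest(contamination=0.3, random_state=0).fit(transactions)
ai_flags = model.predict(transactions) == -1  # -1 marks anomalies

for tx, r, a in zip(transactions, rule_flags, ai_flags):
    print(f"amount={tx[0]:>8.1f}  rule_flag={r}  ai_flag={a}")
```

The night-time transfer is caught by both approaches, but only the anomaly detector can flag the structured payment just below the threshold; conversely, only the rule can state exactly why it flagged something.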
Unlike Luxembourg, for example, Switzerland currently has no plans to certify AI systems for transaction monitoring in banks. And while managers can exonerate themselves from criminally relevant mistakes made by employees by complying with certain due diligence requirements, it is currently unclear which due diligence requirements managers must observe when using AI systems. The risk of criminal liability therefore appears to lie entirely with the banks that use AI systems, which could discourage them from adopting such systems at all. Such a de facto barrier to the use of AI systems contradicts the legislator's goal of increasing efficiency and digitalization in the banking sector.
The aim of Lea Bachmann’s PhD project, supervised by Prof. Dr. Sabine Gless, is, first, to analyze the criminal liability of managers and companies that use AI systems under Swiss law and, second, to define the boundaries of that liability by elaborating due diligence requirements for the use of AI systems based on this analysis. Alongside the findings of the legal analysis, a comparison with Luxembourg’s approach to the use of AI systems in money laundering prevention will be taken into account.
The project’s importance extends beyond anti-money laundering: it is equally relevant to the use of AI systems for supply chain monitoring, autonomous trading bots, and AI-managed funds. In general, the use of AI systems in companies raises the question of which due diligence requirements managers and companies must comply with in order to prevent criminally relevant mistakes by their systems where possible, while at the same time ensuring that they are not exposed to strict liability for (unavoidable) mistakes of those systems.
By elaborating due diligence requirements for the use of AI systems in companies (analogous to managers' criminal liability for employees), Lea Bachmann aims to close a gap in criminal liability. Such due diligence requirements could not only eliminate legal uncertainty in the use of AI systems for transaction monitoring, but could also be applied wherever AI systems are used within companies.
To view the SNF project page, please click here.