How Is AI Used in AML?
With financial crime on the rise, financial institutions are working to strengthen their risk-based approaches to fighting money laundering. The large volume of data generated by AML compliance, together with the increasing sophistication of criminal tactics, requires continued exploration of innovative technologies to meet regulatory obligations. Firms around the world are discovering how AI solutions can help them improve their compliance performance by recognizing risks earlier. This article covers global regulatory viewpoints on AI implementation, as well as some best-practice perspectives from different countries.
What Does the FATF Say About AML, AI, and Machine Learning?
In a 2021 report titled “Opportunities and Challenges of New Technologies for AML/CFT,” the Financial Action Task Force (FATF) addressed AI-based options for AML compliance. Artificial intelligence (AI) refers to the use of sophisticated computing methods to carry out tasks that would otherwise require human intelligence.
The FATF identified machine learning, a subset of AI, as having substantial AML/CFT potential, because machine learning can train computers to learn from data without explicit human intervention.
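As a loose illustration of what “learning from data” means in a transaction-monitoring context, the sketch below derives a flagging threshold from historical transaction amounts instead of hard-coding a rule. It is a minimal, hypothetical example (a simple z-score cutoff, not a production ML model); the function names and figures are invented for illustration only.

```python
# Minimal sketch: the flagging threshold is *learned* from historical data
# rather than written by hand ("flag every transfer above 10,000").
import statistics

def fit_threshold(amounts, z_cutoff=3.0):
    """Learn a flagging threshold (mean + z_cutoff standard deviations)."""
    return statistics.mean(amounts) + z_cutoff * statistics.stdev(amounts)

def flag_transactions(amounts, threshold):
    """Return indices of transactions exceeding the learned threshold."""
    return [i for i, amount in enumerate(amounts) if amount > threshold]

# Historical activity: routine payments only.
history = [120.0, 95.5, 130.0, 110.0, 105.0, 98.0, 125.0, 115.0]
threshold = fit_threshold(history)

# New activity: one transfer sits far outside the learned pattern.
new_activity = [102.0, 99.0, 50_000.0, 118.0]
print(flag_transactions(new_activity, threshold))  # the 50,000 transfer is flagged
```

Real AML models use far richer features (counterparties, geography, velocity), but the principle is the same: the decision boundary comes from the data, not from a manually maintained rulebook.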
What Are the AI Regulations?
Global regulators are also investigating the utility of AML AI tools. The following are some of the most important regulatory perspectives from around the world:
US
Together with federal banking agencies, the Financial Crimes Enforcement Network (FinCEN) published a statement asking depository institutions to consider, evaluate, and responsibly implement innovative approaches, including AML AI technologies.
FinCEN acknowledges the potential of AI applications to better manage money laundering and terrorist financing risks while lowering the cost of compliance. When incorporating AI applications, US financial institutions should therefore weigh factors such as:
- The potential of AI applications to improve existing AML/CFT compliance processes.
- Security hazards and third-party risk management challenges related to AI applications.
- AI applications’ compatibility with current AML/CFT compliance obligations.
UK
The UK’s Financial Conduct Authority (FCA) has recommended that regulators facilitate the safe implementation of this technology. According to the FCA, factors such as the coronavirus pandemic in 2020 accelerated the use of AI in the financial sector. Firms are also required to consider the consequences for AML/CFT.
Germany
Germany’s financial regulator (BaFin) has held many meetings with companies to investigate the AML/CFT implications of AI. In a 2019 study, BaFin noted that AI might improve the detection rate of anomalies and patterns, and thus increase the efficiency of compliance processes. It also stated that regulators must be able to assess the algorithms behind AML AI compliance solutions and may impose minimum supervisory criteria for this purpose.
France
France’s Autorité de Contrôle Prudentiel et de Résolution (ACPR) has focused on the explainability and governance of AI and machine learning in financial institutions. In an earlier discussion, the ACPR delineated the key elements that should characterize their integration:
- Financial institutions should ensure that AI solutions support a crucial business operation or function.
- AI applications should be explainable to, and usable by, both compliance staff and customers.
- Financial institutions should be aware of the biases and hazards associated with human interaction in AI applications.
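To make the explainability point above concrete, here is a hypothetical sketch of a flagging decision that returns human-readable reason codes alongside its score, so compliance staff can see exactly why a transaction was flagged. The rule names, weights, and country codes are invented for illustration and are not drawn from ACPR guidance.

```python
# Hypothetical explainable scoring: each rule has a name, a predicate, and a
# weight, so every score can be traced back to the rules that fired.
RULES = [
    ("LARGE_AMOUNT", lambda tx: tx["amount"] > 10_000, 0.6),
    ("HIGH_RISK_COUNTRY", lambda tx: tx["country"] in {"XX", "YY"}, 0.5),
    ("NEW_BENEFICIARY", lambda tx: tx["beneficiary_age_days"] < 7, 0.3),
]

def score_transaction(tx):
    """Return (risk_score, reasons); the reason codes make the score auditable."""
    reasons = [name for name, predicate, _ in RULES if predicate(tx)]
    score = min(sum(w for name, _, w in RULES if name in reasons), 1.0)
    return score, reasons

tx = {"amount": 25_000, "country": "XX", "beneficiary_age_days": 2}
score, reasons = score_transaction(tx)
print(score, reasons)  # a capped score plus the list of rules that fired
```

A black-box model that only emits a number would satisfy neither compliance staff nor a supervisor asking why a customer was flagged; attaching reason codes to each decision is one common way to meet that expectation.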
Singapore
In its 2018 publication Principles to Promote Fairness, Ethics, Accountability, and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics (AIDA) in Singapore’s Financial Sector, the Monetary Authority of Singapore (MAS) outlined its expectations for the integration of AI into AML applications. As directed by MAS, financial institutions should evaluate each of the four FEAT principles:
- According to MAS, the usage of AI applications should not discriminate against any groups or persons.
- Firms must establish internal governance frameworks to evaluate the reason for AIDA-driven decisions.
- Companies that use AI should ensure that they operate in accordance with their ethical standards. These standards should be applied as strictly to AI applications as to any other component of the service offering.
- To support transparency, firms should be prepared to disclose their use of AIDA and to explain how AIDA-driven decisions affect the customers concerned.