
AI and Financial Crime: A Double-Edged Sword



The rapid growth and adoption of artificial intelligence (AI) across sectors has led to significant changes in the financial industry.  AI went mainstream at the end of 2022, powering self-driving vehicles, image recognition, deepfakes and data analysis.  ChatGPT, the first mover and most successful of the AI chatbots, launched in November 2022 and reached 100 million users by January 2023.  For the financial crime fighter, the question must be asked: how many of those users have malicious intent? With criminal organisations swiftly embracing this technology for nefarious purposes, banks and other financial institutions face the challenge of evolving their fraud-detection and anti-money-laundering systems at a similar pace. 

 

AI Used for Criminal Purposes 

Criminals have swiftly transitioned into the digital age, leveraging AI for social engineering, catfishing and phishing. More sophisticated AI-enabled fraud is already in use: criminals committing insurance fraud submit digitally altered documents and photos known as ‘shallowfakes’ to support false claims, leaving firms that have not yet integrated validation technology defenceless against sophisticated fraud.  


Not only this, but the criminal pool is expanding. Individuals without formal training, language skills or previous experience are now equipped with AI tools for use in financial crime.  A ring targeting UK firms and customers may have no connection to the UK, and may not even use English themselves. 

Emerging AI-based social engineering tactics, including authorised-push-payment fraud, fake websites and counterfeit invoices, further underscore the urgent need for tools to counteract these threats. 


Detecting fraudulent activity becomes increasingly difficult as the boundary between genuine and AI-generated content blurs. Recognising the subtle tells in fake IDs, scam emails and suspicious texts is paramount, and human analysts and regtech tools alike will need to learn to identify AI-generated fraudulent documents.  

There is a growing concern that banks and insurance firms are themselves becoming victims of AI-powered fraud, and must deploy advanced technology to detect such crimes. 

 

Using Data to Combat Financial Crime 

For many practitioners, the key to combatting AI-powered fraud will be data-sharing.  Banks and firms with access to large, pooled datasets will be able to identify patterns and fraud rings, and flag fraudulent activity before a transaction completes.  A minimal sketch of what ring detection over a shared dataset might look like follows. 
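The sketch below is illustrative only, assuming accounts are linked whenever they reuse the same contact details; the account identifiers and phone values are invented, and real deployments would use far richer linking attributes. It uses the networkx graph library to surface clusters of connected accounts.

```python
# A minimal sketch of fraud-ring detection over a pooled dataset: accounts
# that share an attribute (here, a phone number) are joined in a graph, and
# clusters of linked accounts are escalated for review.
# All data below is invented for illustration.

import networkx as nx

# (account, shared attribute) pairs pooled from several institutions.
records = [
    ("acct_A", "phone_1"), ("acct_B", "phone_1"),
    ("acct_C", "phone_2"), ("acct_D", "phone_2"),
    ("acct_E", "phone_2"), ("acct_F", "phone_3"),
]

# Build a graph linking accounts to the attributes they share.
g = nx.Graph()
g.add_edges_from(records)

# Connected components that tie several accounts together are
# candidate rings worth escalating to a human analyst.
for component in nx.connected_components(g):
    accounts = {n for n in component if n.startswith("acct_")}
    if len(accounts) >= 3:
        print("possible ring:", sorted(accounts))
```

No single institution would see phone_2 reused across three accounts; only the pooled dataset makes the cluster visible, which is the practitioners’ core argument for data-sharing.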


As transaction volumes, and cross-border volumes in particular, increase year on year, there is a dire need for improved transaction monitoring and shared datasets.  These datasets would allow both conventional rule-based Boolean logic and machine learning (ML) to flag suspicious activity, as sketched below. 
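The following is a minimal sketch of that hybrid approach, combining one auditable Boolean rule with an unsupervised anomaly detector (scikit-learn’s IsolationForest). The field names, thresholds and sample figures are assumptions for illustration, not any institution’s actual schema or rules.

```python
# Hybrid transaction screening: a human-readable Boolean rule alongside
# an unsupervised ML model, with either flag escalating the transaction.
# Thresholds and data are illustrative assumptions.

import pandas as pd
from sklearn.ensemble import IsolationForest

# Illustrative transactions: amount in GBP and the account's
# transaction count over the preceding 24 hours.
txns = pd.DataFrame({
    "amount": [120.0, 85.5, 9600.0, 42.0, 15000.0, 60.0],
    "txns_last_24h": [2, 1, 14, 3, 1, 2],
})

# Rule-based flag: Boolean logic an analyst can read and audit,
# e.g. a large amount combined with unusually high velocity.
rule_flag = (txns["amount"] > 5000) & (txns["txns_last_24h"] > 10)

# ML flag: IsolationForest scores each transaction by how anomalous
# it looks relative to the rest of the dataset (-1 = anomaly).
model = IsolationForest(contamination=0.2, random_state=0)
ml_flag = model.fit_predict(txns[["amount", "txns_last_24h"]]) == -1

# Escalate anything caught by either approach for human review.
txns["flagged"] = rule_flag | ml_flag
print(txns)
```

The two approaches are complementary: rules encode known typologies and satisfy auditors, while the ML layer can surface novel patterns that no analyst has yet written a rule for.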


Currently, bank secrecy rules, which are intended to protect privacy, prevent entities from sharing valuable data in the fight against financial crime.  Open banking would allow financial institutions to rapidly determine the risk profiles of customers and transactions, and proponents of AI in the fight against financial crime believe that a bank’s immediate access to a customer’s profile and transaction history would enable ML to flag high-risk accounts before fraud occurs.  The forthcoming UK Economic Crime and Corporate Transparency Bill will allow customer information to be shared for the purpose of preventing, detecting and investigating economic crime, without the involvement of law enforcement. 

 

The Challenge of Regulation 

Regulation, however, often lags behind both innovation and crime.  

In 2021, the 193 member states of UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence, which places human rights and dignity at the core of the agreement.  The document recommends policy action areas to its members, including data governance. 


A forthcoming EU regulation, the EU AI Act, will classify uses of AI by level of risk.  AI systems may be minimal-risk (chatbots, computer games, spam filters), high-risk (recruitment, credit checks, courts and justice systems, biometric data uses) or unacceptable-risk (real-time biometric identification for law enforcement, social scoring, subliminal manipulation).  One signature piece of the Act is a watermarking requirement, obliging tech companies to mark content that has been AI-generated. 


The UK government has said that it wants the UK to become an ‘AI superpower’, and while white papers have been published on AI principles and a task force has been set up to address AI safety, no timeline is currently in place for regulation beyond existing data protection law.  The UK GDPR prohibits organisations from making decisions about individuals based solely on automated processing, while a proposed amendment may allow such decisions where there is a system for individuals to contest them with the organisation’s data controller.  

While much of this regulation does not yet exist, it is already clear that different jurisdictions will regulate differently.  This will complicate compliance: multinational organisations will need to satisfy each jurisdiction’s AI rules. 

 

The Way Forward 

For the financial crime fighting community, the future rests on relentless innovation. Identifying AI-generated content, understanding its potential and risks, and developing robust technological tools are essential. Given the vast user base and range of applications of AI-based tools, even a small percentage of illicit use can have significant repercussions. Banks and financial institutions will need to adopt pioneering technology to stay ahead of the game and mitigate risk, whilst ensuring compliance with each jurisdiction’s evolving regulation. 


This article is based on topics discussed by a panel at the Royal United Services Institute on 20 September 2023, “Artificial Intelligence: Friend or Foe in Fighting Financial Crime”.

 

Need guidance on your firm’s best practices?  EFI has a team of financial crime experts and analysts, and provides screening, transaction monitoring and KYC remediation to financial institutions. 

 

 
