In the digital age, financial institutions, particularly those in sectors such as cryptocurrency and online trading, as well as largely branchless banks such as Revolut, rely heavily on digital client acceptance procedures to verify the identities of their users. These Know Your Customer (KYC) processes are critical for regulatory compliance, particularly under the Anti-Money Laundering Directive (AMLD), and for preventing fraudulent activities such as money laundering, terrorism financing and identity theft. However, as these procedures move online, they become increasingly vulnerable to new risks, particularly those posed by recent advances in AI. Hyper-realistic fabricated videos, known as deepfakes and now readily generated by AI, present a significant challenge to the integrity of KYC processes. This article explores the emerging threats that AI, and deepfakes in particular, pose to KYC procedures and proposes a framework to mitigate these risks.

This article is written by Mourad Seghir ([email protected]). Mourad is part of RSM Netherlands Business Consulting Services with a specific focus on Finance & Technology.

Since the release of OpenAI’s ChatGPT in 2022, there has been a dramatic spike in AI-powered fraud. The capabilities of AI, particularly in generating hyper-realistic images and videos, have fuelled a global surge in fraudulent deepfakes. From 2022 to 2023, the number of deepfakes grew tenfold, with North America experiencing a 1740% increase, the Asia-Pacific region (APAC) 1530%, Europe 780%, the Middle East and North Africa (MENA) 450%, and Latin America 410%. In 2023, AI-powered fraud was identified as one of the top five types of identity fraud globally (Muhammad Usman Hadi et al., 2023).

This rise in AI-driven fraud poses significant risks to the financial sector, particularly to KYC and client acceptance procedures.

The Threat of Deepfakes to KYC Processes

AI technologies have advanced rapidly, particularly generative models capable of creating hyper-realistic digital content. For instance, image models such as Flux Realism can generate faces that are nearly indistinguishable from real photographs. Combined with powerful video models such as Runway Gen-3 Alpha, this technology can produce footage that is not only realistic but also seamlessly manipulates visual and audio elements in ways previously unimaginable. These advancements introduce significant risks, especially for KYC processes that rely on visual verification.

A common method in digital KYC procedures involves asking users to record a video of themselves holding a sign with a handwritten message. This method has been widely adopted by financial services due to its simplicity and the perceived difficulty of falsifying such a video. However, with the advent of AI-generated deepfakes, this approach is becoming increasingly vulnerable. AI models can now produce convincing videos that can deceive KYC systems, allowing bad actors to impersonate legitimate users and gain unauthorized access to financial services.
________________________________________
Case Study: Video-Based KYC in Digital Financial Services

To illustrate the risk, consider a typical KYC process used by many digital financial services. A user is instructed to write a specific phrase on a piece of paper, hold it up to the camera, and record a video of themselves showing the paper and speaking the phrase. This video is then used to verify the user's identity. While this method has been effective in the past, the emergence of deepfake technology undermines its security: a malicious actor could use AI to generate a video that mimics this process, producing a convincing but entirely fake identity verification video.
The potential for AI-generated deepfakes to bypass KYC procedures represents a significant threat to the financial sector. If these fraudulent identities go undetected, they can be used to launder money, commit fraud, or engage in other illegal activities, undermining the trust that is essential to the financial system. This risk is particularly concerning in light of the AMLD, which mandates stringent KYC processes to combat money laundering and terrorism financing.
________________________________________

Existing Approaches to Mitigate AI Risks

Traditional methods include watermarking AI-generated images or embedding provenance information in the metadata of digital media files. These methods are intended to help distinguish authentic from altered content.
However, these approaches have significant limitations. Watermarking relies on the content's creator to apply the watermark, a step bad actors simply skip; watermarks can also be removed with basic image editing, and AI models can generate content without applying any watermark at all. Metadata labelling likewise depends on self-reporting and can be stripped or altered during file transfer or editing.
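
To see how fragile metadata labelling is in practice, consider the minimal Python sketch below. It uses the Pillow imaging library to attach a self-reported provenance label to a PNG file and then shows that a plain re-save silently discards it; the filenames and the `ai_generated` tag are illustrative, not part of any standard.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a stand-in "AI-generated" image and label it via a PNG text chunk.
img = Image.new("RGB", (64, 64), "gray")
meta = PngInfo()
meta.add_text("ai_generated", "true")   # self-reported provenance label
img.save("labelled.png", pnginfo=meta)

# The label survives only as long as nobody re-saves the file.
print(Image.open("labelled.png").text)  # {'ai_generated': 'true'}

# A trivial re-save without the metadata silently strips the label.
Image.open("labelled.png").save("stripped.png")
print(Image.open("stripped.png").text)  # {}
```

The same weakness applies to watermarks: the label only exists in the first place if the generator chose to apply it.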

Given these challenges, it is clear that traditional methods alone are insufficient to address the risks posed by AI in KYC procedures. This calls for more robust solutions that can provide greater security and trust in the digital verification process. 

A New Approach 

To address the limitations of current methods, the "Chains of Trust" model could be used as a more comprehensive solution for ensuring the authenticity of digital content. This model leverages advances in cryptography (in some cases blockchain technology) and credentialed identity to create a secure and verifiable framework for digital identities.

The "Chains of Trust" model works by linking digital credentials, known as Verifiable Digital Credentials (VDCs), to the content creator's identity. These credentials are cryptographically signed and can be used to verify the authenticity of the content.

For example, in the context of KYC, a financial institution could issue a VDC to a user after verifying their identity. This credential could then be used in future interactions to prove the user's identity without requiring them to go through the verification process again.
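
As a rough illustration of the mechanics, the Python sketch below uses Ed25519 signatures from the open-source cryptography library to issue and verify a signed credential. The DID-style identifiers and the `kyc_verified` claim are hypothetical placeholders; a production implementation would follow an established standard such as the W3C Verifiable Credentials data model.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuing institution's signing key; the public half is published.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

# After a successful KYC check, the institution issues a credential
# binding a claim ("kyc_verified") to the user's identifier.
credential = json.dumps({
    "subject": "did:example:alice",   # hypothetical user identifier
    "issuer": "did:example:bank",     # hypothetical institution identifier
    "claim": "kyc_verified",
}, sort_keys=True).encode()
signature = issuer_key.sign(credential)

# Later, any relying party can check the credential against the
# issuer's public key instead of re-running the full KYC process.
try:
    issuer_pub.verify(signature, credential)
    print("credential is authentic")
except InvalidSignature:
    print("credential was forged or tampered with")
```

The practical benefit is that verification requires nothing more than the issuer's public key, so the user does not have to repeat the underlying identity check for every new interaction.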

Moreover, the "Chains of Trust" model incorporates Unique Identifiers (UIDs), which are machine-readable names used to refer to specific entities, such as individuals or organizations. By linking these UIDs to VDCs, the model creates a chain of verifiable information that can be used to establish trust in the digital identity of the user. This approach not only enhances the security of KYC processes but also provides a more scalable and interoperable solution for digital identity management.
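
A simplified sketch of how verification might walk such a chain is shown below, with the signature checks elided (see the previous sketch) and purely hypothetical UIDs: trust in a user's credential reduces to trust in a chain of issuers terminating at a known root, such as a regulator.

```python
from dataclasses import dataclass

# A minimal model of the chain idea: each credential links a subject UID
# to the UID of the issuer that vouches for it.
@dataclass
class Credential:
    subject_uid: str
    issuer_uid: str
    # cryptographic signature omitted for brevity; see the Ed25519 sketch

def chain_is_trusted(leaf_uid: str, credentials: dict[str, Credential],
                     trusted_roots: set[str]) -> bool:
    """Walk issuer links from the leaf until a trusted root (or a gap)."""
    uid = leaf_uid
    seen = set()
    while uid not in trusted_roots:
        if uid in seen or uid not in credentials:
            return False   # cycle, or no credential vouches for this UID
        seen.add(uid)
        uid = credentials[uid].issuer_uid
    return True

# Hypothetical example: a regulator vouches for a bank, the bank for a user.
creds = {
    "uid:alice": Credential("uid:alice", "uid:bank"),
    "uid:bank": Credential("uid:bank", "uid:regulator"),
}
print(chain_is_trusted("uid:alice", creds, {"uid:regulator"}))    # True
print(chain_is_trusted("uid:mallory", creds, {"uid:regulator"}))  # False
```

In practice, each link would also carry the cryptographic signatures shown earlier, so that a forged or broken link invalidates the entire chain.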

Thought Leadership

To effectively combat the risks posed by AI-generated deepfakes in KYC processes, it is essential to adopt a comprehensive framework like the "Chains of Trust." This model offers a robust solution that can enhance the security and reliability of digital verification processes, making it more difficult for bad actors to exploit AI technologies for fraudulent purposes.

Financial institutions should consider the following recommendations:

- Reassess the resilience of existing video- and image-based KYC procedures against AI-generated deepfakes.
- Avoid relying solely on watermarking or metadata labelling, which bad actors can bypass or strip.
- Explore cryptographically verifiable approaches such as the "Chains of Trust" model, using Verifiable Digital Credentials (VDCs) and Unique Identifiers (UIDs) to establish trust in digital identities.
- Ensure that any new verification framework continues to meet the KYC obligations set out in the AMLD.

In conclusion, while AI technology offers numerous benefits, it also presents significant risks to the security of KYC and client acceptance procedures. By adopting advanced frameworks like the "Chains of Trust," the financial sector can mitigate these risks and ensure that digital identity verification processes remain robust and trustworthy in the face of evolving AI threats.

RSM is a Thought Leader in the field of Finance & Technology consulting. We offer frequent insights through training and the sharing of thought leadership based on detailed knowledge of regulatory obligations and practical applications in our work with clients. If you want to know more, please reach out to one of our consultants.