In the spirit of Halloween, here’s the spooky tale of the deepfakes capable of raising the dead…
OK, maybe not literally. But, using deepfake technology, a company called MyHeritage allows visitors to upload photographs of deceased family members, which can then be animated into short videos.
The company says the feature is intended “for nostalgic use…to bring beloved ancestors back to life”. But once again, the technology reminds us that deepfakes can pose a threat to society, governments, and enterprises.
How could deepfakes of the dead be used for fraud in financial services?
Deepfakes are videos or images created using AI-powered software to show people saying and doing things that they didn’t say or do. They have been used for pranks and entertainment, but also for more malicious purposes. The number of deepfake videos posted online is more than doubling year-on-year.
Let’s take a quick look at the ways that deepfake technology could be used by fraudsters to commit financial crime:
Ghost Fraud
Ghost fraud is the use of a deceased person’s data to impersonate them for financial gain. Ghost fraudsters can use the stolen identity to access online services, savings, and credit scores, and to apply for cards, loans, or benefits. Using deepfakes of the dead, criminals could make ghost fraud far more convincing.
New Account Fraud
New account fraud, also known as application fraud, occurs when fraudsters use fake or stolen identities specifically to open bank accounts. Fraudsters can max out credit limits under the account name or take out loans that are never repaid. New account fraud is growing, accounting for $3.4 billion in losses, and deepfakes of the dead could be used to make these fraudulent applications more convincing.
Synthetic Identity Fraud
Synthetic identity fraud is a sophisticated and hard-to-spot form of online fraud. Fraudsters create identities using information from multiple people. Instead of stealing one identity—such as a recently deceased person’s name, address, and social security number—synthetic fraudsters use a blend of fake, real, and stolen information to create a “person” who doesn’t exist.
Fraudsters use synthetic identities to apply for credit/debit cards or complete other transactions that help build a credit score for non-existent customers. A deepfake of a deceased person could be used to bolster a synthetic identity.
Annuity/Pension/Life Insurance/Benefit Fraud
Another potential use of deepfakes of the dead is annuity, pension, life insurance, or benefit fraud. A deceased person’s pension could be claimed for years after their death, whether by a professional fraudster or by a family member. Genuine Presence Assurance from iProov can provide insurers and governments with the proof of life needed to prevent such fraud.
Financial crime is estimated to cost the US between $1.4 and $3.5 trillion annually. Crucially, McKinsey found that synthetic identity fraud is the fastest-growing type of financial crime. And that was before Covid-19, which increased the use of digital channels to complete everyday tasks.
What are deepfakes? How does the technology work, exactly?
Deepfake technology is, ultimately, a form of synthetic media powered by artificial intelligence and deep learning. Neural networks are trained on a dataset of images and video of a person, learning to map that person’s likeness onto another. The more data the networks have, the more accurately they can reproduce a likeness and match mannerisms and expressions, and the more realistic the fake video becomes.
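To make the idea concrete, here is a heavily simplified sketch of the architecture behind classic face-swap deepfakes: a single shared encoder paired with one decoder per identity. All dimensions, weights, and data below are made-up placeholders; real systems use deep convolutional networks trained on thousands of video frames, not random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions, purely for illustration.
FACE_DIM, LATENT_DIM = 64, 8

# One shared encoder compresses any face into a compact latent code
# capturing pose and expression.
W_enc = rng.normal(0, 0.1, (FACE_DIM, LATENT_DIM))

# One decoder per identity reconstructs a face from that shared code.
W_dec_a = rng.normal(0, 0.1, (LATENT_DIM, FACE_DIM))  # identity A
W_dec_b = rng.normal(0, 0.1, (LATENT_DIM, FACE_DIM))  # identity B

def encode(face):
    return np.tanh(face @ W_enc)

def decode(latent, w_dec):
    return latent @ w_dec

# The "swap": encode a frame of person A, then decode it with person
# B's decoder, yielding B's likeness with A's pose and expression.
frame_of_a = rng.normal(size=FACE_DIM)
swapped = decode(encode(frame_of_a), W_dec_b)
print(swapped.shape)  # (64,)
```

This shared-encoder design is why training data volume matters so much: each identity’s decoder can only reproduce details it has seen many times.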
Deepfakes have been garnering increased attention in the public eye. You may have seen fake videos of celebrities circulating on social media without even realizing it. Think back to the Zuckerberg video of 2019, which was followed closely by Facebook’s sitewide ban of synthetic video in January 2020. More recently, a computer-generated video of Tom Cruise on TikTok went viral across the web. There was also Channel 4’s infamous deepfake of the Queen, which delivered an alternative Christmas message in the UK.
But what about regulation and legislation? There must be some restrictions, right?
Well, not quite, but regulations are coming. The US government approved a bill in November last year ordering further research into deepfakes, and the UK government has been evaluating legislation to ban non-consensual deepfake videos.
Should you be worried about deepfakes?
Enterprises and governments need to protect their citizens and customers. Consumers are already concerned about deepfakes. In a study, The Threat of Deepfakes, we found that:
- 75% of consumers are more likely to use online services that protect them from deepfakes
- 85% believe that deepfakes will make it harder to trust online services
The use of deepfakes is growing, as is synthetic identity fraud. Retail banking, regulated insurance, and payment gateway providers are key targets for deepfake crime.
Many deepfake videos are still low quality, and there are telltale signs that a video is likely to be a deepfake: changes in eye color, inconsistencies around the hairline, and other visual oddities. However, don’t be misled: deepfake technology is becoming ever more sophisticated, and deepfakes that can’t be detected by the human eye are already out there.
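The visual cues above can be turned into crude automated heuristics. The sketch below, using entirely synthetic data and a made-up fixed "eye region" crop, measures frame-to-frame color flicker: blended or generated regions in low-quality deepfakes can change between frames more than naturally lit skin does. A real detector would locate facial regions with a landmark model and use far more robust features.

```python
import numpy as np

rng = np.random.default_rng(1)

def eye_region_mean_color(frame):
    # Assume the eye region is a fixed crop for this sketch; real
    # systems would detect it per frame with face landmarks.
    return frame[10:20, 10:30].mean(axis=(0, 1))

def color_flicker_score(frames):
    """Mean frame-to-frame change in eye-region color: higher scores
    suggest unnatural flicker of the kind crude deepfakes exhibit."""
    means = np.array([eye_region_mean_color(f) for f in frames])
    return float(np.abs(np.diff(means, axis=0)).mean())

# Synthetic "clips": a stable one, and one with an artificially
# flickering eye region standing in for blending artifacts.
stable = [np.full((32, 32, 3), 0.5) + rng.normal(0, 0.01, (32, 32, 3))
          for _ in range(20)]
flicker = [f.copy() for f in stable]
for i, f in enumerate(flicker):
    f[10:20, 10:30] += 0.2 * (i % 2)  # alternate-frame color jump

print(color_flicker_score(stable) < color_flicker_score(flicker))  # True
```

As the article notes, heuristics like this stop working as generation quality improves, which is why presence assurance rather than artifact-spotting is the more durable defense.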
So, what’s the answer? Where does biometrics come in?
iProov’s Genuine Presence Assurance technology protects organizations and users against the threat of deepfake fraud. Our patented solution uses a sequence of colored light to verify that a person is the right person, a real person, authenticating right now. This means that banks, governments, and other organizations can use face biometric authentication to securely verify the identity of their users.
iProov are sponsoring the Future Identity festival, November 15th & 16th 2021, at The Brewery, London.