Fighting AI with AI Fraud Monitoring for Deepfake Applications

Photo by Tima Miroshnichenko

Deepfakes have been a big topic of conversation in the data science community for some years now. Back in 2020, the MIT Technology Review posited that deepfakes had hit their "tipping point for mainstream use".

The data certainly backs that up. The Wall Street Journal reported that fewer than 10,000 deepfakes had been found online in 2018. Those numbers now run into the millions, and there are many real-life examples of deepfakes being used both to confuse and misinform and to perpetrate financial fraud.

Taken together, deepfake techniques provide cybercriminals with a range of sophisticated options.

They go way beyond the ability to insert the image of a celebrity into promotional material for an "unmissable" Bitcoin offer, which – of course – turns out to be a scam. Deepfake videos, in particular, are on the radar of fraudsters: they provide a way to get through automated ID and KYC checks and have proved frighteningly effective.

In May 2022, The Verge reported that "liveness tests" used by banks and other institutions to help verify users' identities can be easily fooled by deepfakes. The related study found that 90% of the ID verification systems tested were vulnerable.

So what's the answer? Are we entering an era where cybercriminals can easily use deepfake technology to outwit the security measures used by financial institutions? Will such businesses have to ditch their automated systems and revert to manual, human checks?

The simple answer is “probably not”. Just as criminals can make use of the surge in AI advancements, so too can the companies they target. Let’s now look at how vulnerable businesses can fight AI with AI.

Deepfakes are produced using a range of artificial intelligence techniques, such as:

  • generative adversarial networks (GANs) 
  • encoder/decoder pairs
  • first-order motion models
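
To make the first of these concrete, here is a minimal, illustrative sketch of the adversarial training loop behind a GAN, written in PyTorch with toy two-dimensional data standing in for face images. It is nothing like a production deepfake pipeline; it simply shows the core idea of a generator learning to fool a discriminator.

# A minimal, illustrative GAN training loop (PyTorch): a generator learns to turn
# random noise into samples a discriminator cannot tell apart from "real" data.
# Toy two-dimensional points stand in for face images here.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    # "Real" samples: points from a fixed Gaussian, standing in for genuine images.
    real = torch.randn(64, data_dim) * 0.5 + 2.0
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator update: label real samples 1 and generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator label fresh fakes as real.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()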

These techniques may, on the face of it, sound like the exclusive preserve of the machine learning community, complete with high barriers to entry and a need for expert technical knowledge. However, like other elements of AI, they have become considerably more accessible over time.

Low-cost, off-the-shelf tools now allow non-technical users to create deepfakes, just as anybody can sign up to OpenAI and test the capabilities of ChatGPT.

As recently as 2020, the World Economic Forum reported that the cost of producing a "state of the art" deepfake was under $30,000. But in 2023, Wharton School professor Ethan Mollick revealed, via a viral Twitter post, that he had produced a deepfake video of himself delivering a lecture in under six minutes.

Mollick’s total spend was $10.99. He used a service called ElevenLabs to almost perfectly mimic his voice, for a cost of $5. Another service called D-ID, at $5.99 per month, generated a video based on only a script and a single photograph. He even used ChatGPT to create the script itself.

When deepfakes first began to emerge, a primary focus was on fake political videos (and fake pornography). Since then, the world has seen:

  • BuzzFeedVideos create a deepfake public service announcement "featuring" Barack Obama, impersonated by actor Jordan Peele.
  • A deepfake YouTube video purporting to show Donald Trump telling a story about a reindeer.
  • A deepfake video of Hillary Clinton shown on Saturday Night Live, when she was in fact being impersonated by a cast member.

While these examples show the “fun” side of deepfakes, and perhaps provide a jolt of reality as to the capabilities of the technology, fraudsters haven’t wasted any time in using them for nefarious purposes. 

There are many real-life examples of fraud perpetrated using deepfake techniques.

Losses due to deepfake scams range from hundreds of thousands to many millions of dollars. In 2021, an AI voice cloning scam was used to arrange fraudulent bank transfers of $35 million. This was a huge financial payoff that didn't even require the use of video.

The quality of AI output, especially video, can vary hugely. Some videos are obviously fake to human eyes. But, as stated above, automated systems, such as those used by banks and fintechs, have proved easy to fool in the past.

The balance is likely to shift further as AI capabilities continue to improve. One recent development is the incorporation of "counter-forensics", where targeted, invisible "noise" is added to deepfakes in an attempt to fool detection mechanisms.

So what can be done?

Just as fraudsters seek to use the latest AI technology for financial gain, tech firms are hard at work finding ways to use the same technology to catch criminals.

Here are a couple of examples of companies using AI to fight AI:

In late 2022, Intel launched an AI-based tool called "FakeCatcher". With a reliability rate reported by Intel at 96%, it uses a technology known as photoplethysmography (PPG).

The tech makes use of something that's not present in artificially generated videos: blood flow. Trained on legitimate videos, its deep-learning algorithm measures the light that's absorbed or reflected by blood vessels in the face, which changes subtly as blood moves around the body.

FakeCatcher, part of Intel's Responsible AI initiative, is described as "the world's first real-time deepfake detector that returns results in milliseconds." It's an innovative technology that looks for signs that the person shown in a video is truly human: rather than analyzing data to highlight something that's "wrong", it looks for something that's "right", and uses that signal to indicate the likelihood of a fake.
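
For intuition only, here is a rough Python sketch of the underlying remote-photoplethysmography idea, and emphatically not Intel's implementation. It assumes a face region has already been located, averages its green channel frame by frame, and checks for a plausible pulse in the frequency domain; a real system would need face tracking, skin segmentation, and far more robust signal processing.

# A rough illustration of the remote-PPG idea (not Intel's FakeCatcher): average
# the green channel of an assumed face region across frames, then check whether
# the signal carries energy in the typical human pulse band (~0.7-4 Hz).
# video_path and face_box are hypothetical inputs from an upstream face detector.
import cv2
import numpy as np

def ppg_pulse_strength(video_path, face_box, fps=30):
    x, y, w, h = face_box
    cap = cv2.VideoCapture(video_path)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]
        signal.append(roi[:, :, 1].mean())   # mean green channel of the skin region
    cap.release()

    sig = np.asarray(signal) - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)

    # Fraction of spectral energy in the pulse band; very low values are suspicious.
    band = (freqs > 0.7) & (freqs < 4.0)
    return spectrum[band].sum() / (spectrum.sum() + 1e-8)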

Meanwhile, University of Buffalo (UB) computer scientists have been working on a deepfake detection technology of their own. It uses something that avid PC gamers know requires immense processing power to emulate: light.

Claimed by UB to be 94% effective on fake photos, the AI tool looks at how light reflects in the eyes of the subject. The surface of the cornea acts as a mirror, and generates “reflective patterns”.

The scientists’ study, entitled “Exposing GAN-Generated Faces Using Inconsistent Corneal Specular Highlights”, indicates that “GAN synthesized faces can be exposed with the inconsistent corneal specular highlights between two eyes”.

It suggests that it would be “nontrivial” for AI systems to emulate the genuine highlights. PC gamers, who often invest in the latest ray-tracing graphics cards in order to experience realistic lighting effects, will instinctively recognize the challenges here.
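
As a simplified illustration of that finding, and not the authors' actual method, the sketch below compares the position of the brightest highlight in crops of the left and right eye; in a genuine photo the two should roughly agree, because both corneas reflect the same light sources. The crops, threshold, and tolerance are all assumptions made for the example.

# A simplified illustration of the corneal-highlight idea (not the authors' code):
# in a genuine photo, both corneas reflect the same light sources, so the bright
# specular highlights should sit in roughly the same relative position in each eye.
# left_eye and right_eye are assumed grayscale crops from an upstream eye detector.
import numpy as np

def highlight_position(eye_crop):
    """Centroid of the brightest pixels, normalised to the crop size."""
    thresh = np.percentile(eye_crop, 99)              # top 1% brightest pixels
    ys, xs = np.nonzero(eye_crop >= thresh)
    h, w = eye_crop.shape
    return np.array([xs.mean() / w, ys.mean() / h])

def highlights_consistent(left_eye, right_eye, tol=0.15):
    """Flag eye pairs whose highlight positions disagree by more than tol."""
    diff = np.linalg.norm(highlight_position(left_eye) - highlight_position(right_eye))
    return diff <= tol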

Perhaps the greatest fraud detection challenge is the endless “cat and mouse” game between fraudsters and those who work to thwart them. It’s highly likely, in the wake of announcements such as those above, that people are already working on building technologies that can sidestep and beat such detection mechanisms.

It's also one thing for such mechanisms to exist, and quite another for them to be routinely integrated into the solutions that businesses use. Earlier, we referred to a statistic that suggested 90% of solutions can be "easily fooled". The likelihood is that at least some financial institutions are still using such systems.

A wise fraud monitoring strategy requires companies to look beyond detecting the deep fakes themselves. Much can be done before a fraudster gets far enough into a system to participate in a video-based ID verification or KYC process. Precautions that find a place earlier in the process may also involve an element of AI and machine learning.

For example, machine learning can be used for both real-time fraud monitoring and the creation of rulesets. These can look at historical fraud events, detecting patterns that could easily be missed by a human. Transactions deemed to be high risk can be rejected outright, or passed for manual review before even reaching a stage where there may be an ID check – and therefore an opportunity for a fraudster to make use of deepfake tech.
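
As a rough sketch of how such a triage step might look, the example below trains a gradient-boosting classifier on synthetic "historical" transactions and routes new ones by risk score before any video ID check is reached. Every feature name, threshold, and data point here is invented for illustration.

# A minimal sketch of risk triage before any ID check is reached. The feature
# names, thresholds, and "historical" data below are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Stand-in history: [amount, account_age_days, ip_risk_score], label 1 = fraud.
X_hist = rng.random((5000, 3)) * [10_000, 3650, 1.0]
y_hist = (0.3 * (X_hist[:, 0] > 7_000) + 0.4 * (X_hist[:, 2] > 0.8)
          + 0.1 * rng.random(5000)) > 0.35

model = GradientBoostingClassifier().fit(X_hist, y_hist)

def triage(transaction):
    """Route a transaction by model risk score before any video ID/KYC step."""
    risk = model.predict_proba([transaction])[0, 1]
    if risk > 0.9:
        return "reject"              # never reaches the video ID check
    if risk > 0.5:
        return "manual_review"
    return "proceed_to_id_check"

print(triage([9_500, 12, 0.95]))     # new account, large amount, risky IP address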

The earlier a system detects a cybercriminal, the better. There's less chance that they can perpetrate a crime, and less for the business to spend on further checks. Video-based ID checks are costly, even without the incorporation of AI technology to detect deepfakes.

If fraudsters can be identified before they get that far, with techniques such as digital footprinting, there will be more resources left available to optimize the checks on more borderline cases.

The very nature of machine learning should dictate that, over time, it becomes better at detecting anomalies and fighting fraud. AI-powered systems can learn from new patterns and potentially filter out fraudulent transactions at an early stage in the process.

When it comes to deepfakes specifically, the examples above give particular reason for hope. Scientists have found ways to detect the vast majority of deepfakes using light reflections. Developments like this represent a considerable step forward in fraud prevention and a considerable roadblock for cybercriminals.

In theory, it’s much easier to deploy such detection technology than it is for fraudsters to find a way to circumvent it – replicating the behavior of light, for example, at speed, and at scale. The “cat and mouse” game seems likely to continue eternally, but big tech and big finance have the resources and the deep pockets to – in theory at least – stay one small step ahead.
 
 
Jimmy Fong is the CCO of SEON and brings his in-depth experience of fraud-fighting to assist fraud teams everywhere.
 
