Can AI and ML really help fight fraud?

Fraud has been a problem since coins were introduced more than two thousand years ago. In today’s digital age, the problem won’t go away – indeed, it’s getting worse despite this industry’s best efforts. Many start-ups and established firms claim to use AI and ML to reduce fraud threats and the value lost to fraud. But given fraud’s fast-changing nature, can machines really help us?

James Wood, PCM Managing Editor, asks whether we should believe the hype.

Last year’s Money20/20 conference in Las Vegas was bursting with companies claiming to fight fraud using Artificial Intelligence (AI) and Machine Learning (ML). This correspondent could have spent his first day and a half at the show meeting AI and ML firms pitching their wares to reduce threats and losses: if crypto and blockchain were 2018’s buzzwords, then AI and ML take this year’s crown.

There’s no doubt, though, that these firms are addressing a major problem. Globally, payments fraud costs £3.24 trillion annually, according to the Centre for Counter Fraud Studies – roughly the combined GDP of the UK and Italy. In the UK alone, it’s estimated that GDP would rise by £44 billion annually if payments fraud were better tackled. But fraud is a slippery target, as recent data from FICO shows.

FICO’s research into UK fraud shows how attacks mutate, shifting not just from one type to another but also in frequency and intensity. Last year, Card-Not-Present (CNP) fraud – the type associated with online commerce – rose by 24 percent in the UK; globally, Experian says 37 percent of businesses are experiencing more of this fraud than previously. The industry has been aware of CNP fraud for some years and has stepped up its efforts to tackle the problem. However, the fastest-emerging threat in the UK comes from ID fraud – 70 percent of which is closely associated with the theft of PINs via social media accounts. This kind of fraud rose by 48 percent last year in the UK – and now the industry is scrambling to find a response.

Given the rapid emergence and severity of such threats, it’s no wonder regulators are taking matters into their own hands – as the EU’s mandate for Strong Customer Authentication (SCA) demonstrates. Set to come into force at the end of 2020, SCA is unpopular with some merchants and banks given its insistence on multi-factor authentication. Most commentators agree SCA will reduce fraud, even if its provisions end up causing user friction and a more fragmented user experience.

At first sight, then, it looks as though changing the user inputs required to confirm identity will reduce fraud – despite possible increased friction and lower sales. However, Kushal Shah, SVP of Global Product and Expansion at Ekata, a US firm specialising in securing digital identity, says this is “a short-term solution.” According to Shah, “user needs for a smooth buying experience [online] will push firms to introduce something more sophisticated in the longer term. There’s no doubt that the SCA mandate will increase friction considerably from January 2021 – friction at checkout is going to be a huge problem in North America and Europe.”

Part of the problem lies in understanding exactly what AI and ML are, and what they can – and can’t – do. As Vinay Sridhara, CTO at Balbix, a data protection and cybersecurity firm, puts it, “From the perspective of data security and privacy, a lot of AI and ML offerings are basic – and yet claim to be sophisticated.”

Those who have lived through hype cycles in this industry will know what Sridhara means. With AI as this year’s buzzword, some might think that slapping “AI-enabled” on a pitch document could net a few extra million dollars in funding. Cynicism aside, there’s no doubt we urgently need AI that can fight fraud in real time, using vast data arrays and high throughput to automate fraud identification and reduce losses. We also need to disrupt known and unknown threat vectors effectively – especially since such threats are multiplying all the time.

Fraud for sale

Sune Gabelgård, Head of Digital Fraud Intelligence at Danish payments processor and services firm NETS, says threat vectors are constantly becoming more unpredictable and sophisticated. “We now face a fraud supermarket situation, where almost anyone can undertake payment fraud. Criminals can bulk-buy card data on the dark web, including user IDs as well as card numbers.”

Given that context, not all AI-based projects will succeed – but although AI is tough to get right, that doesn’t mean it has no uses. Gabelgård says that AI is best understood as a “brain amplifier”, helping humans to make smart decisions, rather than as a solution in its own right. He also points out that the introduction of SCA in Europe will cause fraud to migrate elsewhere, rather than reducing fraud across the continent. In support of this, he cites a large Nordic bank’s recent attempt to reduce cross-border payments fraud by introducing geo-blocking: within six months, fraud simply migrated from geo-location crimes to CNP attacks.

Such comments highlight the limits of rules-based interventions and regulations: no sooner have legislators passed rules to reduce one type of fraud – Chip and PIN led to a vast reduction in mag stripe and signature fraud, for example – than other fraud types rapidly appear. It’s clear that automating the identification and prevention of fraud using intelligent algorithms and “deep” machine learning techniques has obvious benefits – provided that it works.

Pushing protection

It will come as no surprise that AI’s proponents want you to believe their algorithms produce dramatic improvements in fraud detection and prevention. According to a recent Brighterion survey, almost three-quarters (73 percent) of major financial institutions use AI and ML in their anti-fraud work and, of these, 80 percent believe – a crucial word – that these technologies help to reduce fraud. Some 64 percent of users also believe – that word again – that AI and ML will be able to help stop fraud before it happens.

For many in payments, especially the insurers who underwrite the 75 percent of payment losses that are never recovered, that’s the holy grail of fraud management: being able to predict and prevent fraud before losses are incurred.

Most would agree that AI is presently some distance from that nirvana. And yet that does not negate the use of these technologies – especially for closely defined use cases, and within specific parameters. Before going further, it’s appropriate to define terms: machine learning (ML) refers to analytic techniques that “learn” patterns in datasets without being guided by a human analyst. Artificial Intelligence (AI), on the other hand, refers to the broader application of specific kinds of analytics to accomplish tasks. One way to understand the difference is to think of machine learning as a way of building analytical models, and AI as the use of those models.

Machine learning helps companies to work out which transactions are most likely to be fraudulent. One of the most effective applications of AI and ML is in the elimination of “false positives” – those transactions which appear fraudulent, but in fact are legitimate. Through the application of basic rules and checks, AI and ML have proven to be very effective in reducing false positives. One case study from Teradata claimed that the implementation of AI and ML helped to reduce false positives by 60 percent – a figure expected to rise to 80 percent as models continue to learn.
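To make the false-positive use case concrete, here is a minimal sketch of a supervised fraud-scoring model, evaluated on precision – the metric that directly captures false positives. The file name, features and labels are illustrative assumptions, not details from the Teradata case study.

```python
# Minimal supervised fraud-scoring sketch (illustrative assumptions:
# a labelled history in transactions.csv with the columns used below).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("transactions.csv")
X = df[["amount", "hour", "km_from_home", "merchant_risk"]]
y = df["is_fraud"]  # assumed label: 1 = confirmed fraud, 0 = legitimate

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# class_weight="balanced" compensates for fraud being rare in the data.
model = RandomForestClassifier(n_estimators=200, class_weight="balanced")
model.fit(X_train, y_train)

# Precision answers the false-positive question directly: of the
# transactions the model flags as fraud, how many really are?
print("precision:", precision_score(y_test, model.predict(X_test)))
```

As the Teradata figures suggest, the gains compound as new labelled outcomes arrive and the model is retrained on them.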

The drive for data

However, there’s a catch: any model is only as good as the data on which it’s based – and the data which that model then receives to work with. Ekata’s Kushal Shah is acutely aware of this problem, noting that “properly labelled, rich and clean data is a problem for a lot of businesses. 70 percent of deploying an effective ML model is accurately cleansing and labelling data – there are now specialist companies emerging that can help with this task.”
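A cleansing and labelling pass of the kind Shah describes might, in minimal form, look like the sketch below. The files, column names and rules are assumptions for illustration – not Ekata’s actual pipeline.

```python
# Illustrative pre-modelling cleanup in pandas (assumed files and columns).
import pandas as pd

df = pd.read_csv("raw_transactions.csv")

# Remove exact duplicates, e.g. double-captured transactions.
df = df.drop_duplicates()

# Normalise inconsistent formats before anything else.
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
df["currency"] = df["currency"].str.upper().str.strip()

# Rows that cannot be parsed are unusable as training data.
df = df.dropna(subset=["amount", "timestamp"])

# Label by joining against a separately maintained log of confirmed fraud.
fraud_ids = pd.read_csv("confirmed_fraud_ids.csv")["transaction_id"]
df["is_fraud"] = df["transaction_id"].isin(fraud_ids).astype(int)
```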

Data quality does indeed appear to be at the centre of effective AI and ML use in fraud management. So-called “unsupervised” models – which learn what constitutes a “normal” transaction from vast arrays of data, then flag anomalies – have proven effective in reducing false positives and catching known fraud types. Yet the example of One-Time Password (OTP) compromise in India demonstrates that AI is far from infallible.
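In minimal form, the unsupervised approach works like the sketch below: fit a model to unlabelled transactions so it learns what “normal” looks like, then flag outliers for review. The features and the assumed anomaly rate are illustrative.

```python
# Unsupervised anomaly detection sketch using an Isolation Forest.
import pandas as pd
from sklearn.ensemble import IsolationForest

df = pd.read_csv("transactions.csv")  # assumed: unlabelled history
features = df[["amount", "hour", "km_from_home"]]

# contamination is the analyst's prior on the anomaly rate; in practice
# it must be tuned against confirmed fraud outcomes.
detector = IsolationForest(contamination=0.01, random_state=42)
df["flag"] = detector.fit_predict(features)  # -1 marks an anomaly

# Route anomalies for review rather than auto-declining them: an
# anomaly is unusual, not yet proven fraudulent.
print(f"{(df['flag'] == -1).sum()} transactions flagged for review")
```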

India: fraud on the move

The dynamism of India’s payments market cannot be denied, nor can its increasing sophistication. However, this growing sophistication has brought its own challenges, and India is now set to overtake the UK as the major market most at risk of payment fraud outside the United States.

Since the government’s demonetisation drive began in 2016, pushing the country towards digital payments, new payment methods have proliferated in India. These new methods have been embraced – especially by the 65 percent of India’s population that lives in rural areas. Such is the success of digital peer-to-peer systems like PayTM that some eight billion mobile transactions per month are now processed in the country, according to data from InfoSys Finacle.

Anticipating the risk of fraud and appreciating the need to act, the Indian government created a financial inclusion programme called Jan Dhan to give affordable access to financial services, including payments. Data from operations on these accounts was stored in a centralised database, and India’s private banks (many of which have up to 40 million customers) were ordered to maintain their own databases of accurate, clean transaction data. In parallel, the government introduced a comprehensive digital ID system called Aadhaar, designed to ensure all parties to a transaction could be accurately verified. Finally, electronic Know Your Customer (eKYC) routines were introduced for all transactions to confirm user IDs – alongside AI routines to identify and flag anomalous transactions.

Despite all this planning and caution, the compromise of One-Time Passwords (OTPs) sent over SMS became so widespread a means of fraud that users began to lose confidence in the system. The Indian government acted quickly, mounting a communications campaign to raise awareness of the perils of “phishing” for social media access, which enables OTP hijacking. Notwithstanding the relatively advanced AI employed and the rich, clean data on offer, OTP compromise rose 15 percent by volume and 74 percent by value between 2018 and 2019.

In the end, India’s government turned to a regulated solution, mandating the use of ambient possession checks over mobile networks – in which a device’s SIM card is checked against its registration details – to confirm that transactions are happening on the correct devices. As with the introduction of SCA in Europe, such measures would not be required if AI were as effective as some of its proponents claim.
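Conceptually, such a possession check reduces to comparing the SIM identifier the network observes with the one registered to the account. The sketch below is a toy illustration: the lookup table and IMSI values are invented, and a real deployment queries the mobile operator in-network rather than a local dictionary.

```python
# Toy possession check: does the observed SIM match the registered one?
REGISTERED_SIMS = {"account_42": "404-10-1234567890"}  # account -> IMSI on file

def possession_check(account_id: str, observed_imsi: str) -> bool:
    """True only if the transaction comes from the device whose SIM
    is registered to the account."""
    return REGISTERED_SIMS.get(account_id) == observed_imsi

# A mismatch suggests the wrong device, warranting step-up or a block.
print(possession_check("account_42", "404-10-1234567890"))  # True
print(possession_check("account_42", "404-10-0000000000"))  # False
```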

As Rajashekara V. Maiya of InfoSys Finacle says, “AI and ML can help identify usage patterns and reduce false positives, and help to prevent fraud in these ways. But what is needed is a marriage between multi-factor authentication on the one hand, and effective AI on the other.” Balbix’s Vinay Sridhara agrees, adding: “it’s possible to envisage a multi-layered fraud prevention system operating on confidence levels. AI and ML can work to a certain confidence level to flag up anomalous transactions; after that, confirmatory factors may be required. The problem is that most AI and ML companies consider one or two vectors at most; the more holistic one’s approach to AI and ML is, the more effective it will be in fighting fraud.”
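The multi-layered, confidence-based flow Sridhara envisages can be pictured as a simple routing function over a model’s fraud score. The thresholds and three-way split below are invented for illustration and would need calibrating against real loss data.

```python
# Confidence-tiered routing: approve, step up, or decline on model score.
def route_transaction(fraud_probability: float) -> str:
    if fraud_probability < 0.05:
        return "approve"   # high confidence the payment is genuine
    if fraud_probability < 0.60:
        return "step_up"   # ask for a confirmatory factor, e.g. an OTP
    return "decline"       # high confidence of fraud

# A borderline score triggers step-up rather than a hard block, trading
# a little friction for fewer false declines.
for score in (0.01, 0.30, 0.90):
    print(score, "->", route_transaction(score))
```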

In the end, there’s also a question here which goes beyond mathematics and concerns the efficacy of attempts to pit digital intelligence against human ingenuity. AI may be effective at removing false positives, reducing known types of fraud, and smoothing the user experience by eliminating some of the need for confirmatory factors – but whether we will see it reach the promised land of predicting fraud before it happens remains, at best, an open question.
