When California lawyer Christopher Pitet became a victim of payment fraud earlier this year, the email, as the classic horror movie trope goes, came from inside the house.
A client of Pitet’s had recently settled a legal dispute and the lawyer received an email, seemingly from the opposing attorney, with instructions on where to send the $59,517.50 agreed in the settlement. He promptly wired the full amount, as requested.
Neither the email nor the instructions were what they seemed. In fact, the message had been sent by a hacker who had installed a monitoring bot on the server of Pitet’s law firm and watched the settlement talks proceed until the precise moment when payment was due. Pitet, a lawyer well-versed in fraud, had unwittingly wired his client’s money directly into the hacker’s account.
Pitet quickly realised that he had been duped and contacted Citibank, which held the hacker’s account. Citi refused to help, saying it was not at fault and not legally responsible to cover Pitet’s losses.
The lawyer sued the bank, arguing that it should have caught the fraud because the payee name on the wiring instructions, which was correct, did not match the routing and account numbers Pitet supplied, which were the hacker’s.
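The gap Pitet’s suit pointed to can be illustrated with a minimal sketch of a payee-name check, similar in spirit to the UK’s “Confirmation of Payee” scheme: before executing a wire, compare the beneficiary name the sender supplied against the name registered on the destination account. The names and logic below are invented for illustration and are not how Citi or any bank actually implements such checks.

```python
# Hypothetical "confirmation of payee" style check: flag a wire when the
# instructed payee name does not match the destination account's holder.
# All names here are invented for illustration.

def payee_matches(instructed_name: str, registered_name: str) -> bool:
    """True when the instructed payee name matches the registered holder."""
    def normalise(s: str) -> str:
        # Lower-case and collapse whitespace so trivial differences don't
        # cause false mismatches.
        return " ".join(s.lower().split())
    return normalise(instructed_name) == normalise(registered_name)

instructed = "Smith & Jones LLP"   # name on the wiring instructions
registered = "J. Doe"              # actual holder of the destination account

if not payee_matches(instructed, registered):
    print("WARNING: payee name does not match account holder; hold the wire")
```

In Pitet’s telling, exactly this kind of mismatch existed between the correct payee name and the hacker’s account details, which is why he argued the bank was positioned to spot the fraud.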
But the bank stood firm. “Citibank does prevail in these cases, and, accordingly, does not settle them,” Citi’s in-house lawyer wrote to Pitet in an email, which was shared with the Financial Times.
Most disturbing to Pitet was that Citi’s email cited five cases in the past two years alone in which others, including law firms, had been defrauded in a similar way, sued Citi and lost. Sensing a losing battle, he dropped the suit.
Citi, through a spokesperson, said Pitet’s case “lacked legal merit.” The spokesperson added that while Citi works hard to prevent fraud as well as help clients recover lost funds, the bank is “not liable for the actions of those individuals who are deceived into following instructions from criminals”.
From payment frauds like the one that fooled Pitet to imposters using sophisticated models to target people likely to owe back taxes, advancements in AI and the speed of real-time payments have made it easier than ever for scammers to manipulate someone into willingly handing over their money and make off with it just as fast.
Precise figures for the losses are hard to pin down, with many instances going unreported out of embarrassment or fear of retribution. But the Federal Trade Commission in the US estimates that in 2023 as much as $158bn was lost to all types of scams, up from $137bn in 2022.
Audio, video and images generated by AI — so-called deepfakes — are one of the factors behind that rise. Accounting and consulting firm Deloitte estimates that AI-generated content contributed to more than $12bn in fraud losses in the US last year, and could reach $40bn by 2027.
As the problem has grown in a range of countries, so has the debate between government, banks and technology companies over who should foot the bill when the money cannot be recovered.
In the UK, the government ruled that banks are liable for up to £85,000 in losses. In Australia, more of the blame may be pinned on tech companies.
In the US, the question of who must pay remains unanswered — and is becoming politically fraught. Some senior Democrats want the banks to take more responsibility, and the Consumer Financial Protection Bureau is investigating Zelle, an account-to-account payments system owned by a consortium of large US banks which has been used by scammers.
The banks are fighting to deflect the blame, and JPMorgan Chase, the largest US bank, has said it is prepared to sue the CFPB in response to its probe. JPMorgan Chase chief executive Jamie Dimon told an audience of bankers in October: “You can’t have a system where every payment that is knowingly sent, we’re responsible for.”
Banks are instead trying to pin the blame on technology companies including Meta, TikTok and Snapchat, where many scams originate.
In the meantime it is victims like Pitet who are paying the price. The fact that Citibank was aware of multiple incidents similar to his suggests an unwillingness to act, the lawyer says. “If they knew people were regularly being ripped off in this way, why didn’t they do anything about it?” he asks. “Banks can, and should, do more.”
These scams are the latest front in the banking industry’s long-running battle against fraud.
In the 1990s and 2000s, criminals found ways to scam cashpoints as electronic cards became more popular. As banks clamped down and put more controls on withdrawals, scammers moved on to cyber hacks and account takeovers to steal customers’ money.
A warning shot for the industry was an enormous data hack of JPMorgan in 2014, which resulted in the theft of details for more than 80mn households and businesses.
This fostered a pattern in which banks focused more on defending against criminals breaking into their systems than on stopping money leaving customers’ accounts.
As banks have beefed up their defences, scammers have identified the customers as a weak link. “We’re in the social engineering phase where criminals are convincing the real consumer to actually initiate those transactions,” says Cleber Martins, head of payments intelligence and risk at ACI Worldwide, a payments group.
“If you bypass all the controls, and if it’s really the customer initiating the transaction they believe they should be doing, why would the banks have to pay them back?”
As the practice has grown, so too has the scale of the losses. “In the last two and a half years, the nature of the scam is they’re going to take everything you have,” says Erin West, a former California prosecutor. “What we were seeing is an industrialised form of attack.”
In some places, online fraud has become a kind of cottage industry. As a California prosecutor, West attempted to help victims of so-called pig-butchering scams where fraudsters living in compounds in countries like Cambodia, Myanmar and the Philippines promote romance scams and induce victims to “invest” in bogus cryptocurrency schemes.
The changing nature of banking has made the issue worse. Online banking apps and real-time payments allow criminals to receive a victim’s cash immediately. ACI estimates that 63 per cent of such scams were conducted over real-time payment networks in 2023, and that this will rise to 80 per cent by 2028.
In the US, if a consumer has their debit or credit card stolen, federal law limits their liability for any charges made if the theft is reported promptly. But the rules of the road for fraudulent account-to-account transactions are much less clear.
If someone is induced to send money into someone else’s account in a transaction that ends up being a scam, often there is little recourse to get that money back. The account holder is relying on the willingness of their bank to refund them. But banks argue this is the digital equivalent of handing over cash in the street, and not their liability.
“If they’re held responsible for all of this fraud, which can easily run into billions of dollars every year, then that’s a real cost to the bank and some banks are not going to be able to survive that cost,” says Annemarie McAvoy, head of Clovis Quantum Solutions, a consulting firm that focuses on financial crimes and investigations.
The industry has taken some steps to modernise its defences. Payments companies such as Mastercard, Visa and Early Warning, which operates Zelle, along with consulting firms such as Accenture, have rolled out tools for banks that can rate payment transactions by fraud risk, based on the type or size of payment, in a fraction of a second.
Banks can use the scores to reject certain transactions, or justify why they approved others if they do turn out to be fraudulent. But increasingly, governments and regulators want them to do more.
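The scoring-and-decision loop those tools implement can be sketched in a few lines. This is a toy illustration only: the signals, weights and threshold below are invented, and real systems from the vendors named above use far richer models.

```python
# Toy sketch of threshold-based transaction risk scoring: each payment is
# scored in a fraction of a second, and high-scoring payments are held or
# rejected. Features, weights and the threshold are invented for illustration.

def risk_score(amount: float, payment_type: str, new_payee: bool) -> float:
    """Combine a few simple signals into a fraud risk score between 0 and 1."""
    score = 0.0
    if payment_type == "real_time":   # instant rails leave no recall window
        score += 0.3
    if new_payee:                     # first payment to this account is riskier
        score += 0.3
    if amount > 10_000:               # large transfers carry more exposure
        score += 0.3
    return min(score, 1.0)

def decide(score: float, threshold: float = 0.7) -> str:
    """Banks could reject or review payments above the threshold."""
    return "hold_for_review" if score >= threshold else "approve"

# A large real-time payment to a never-before-seen payee trips the threshold.
score = risk_score(59_517.50, "real_time", new_payee=True)
print(decide(score))   # -> hold_for_review
```

A retained score also gives the bank an audit trail: it can later show why a transaction that turned out to be fraudulent nonetheless looked low-risk at the moment it was approved.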
In the UK, regulators set a precedent with a requirement for banks to reimburse victims of authorised push payment fraud, in which someone is tricked into sending money to a fraudster posing as a genuine payee.
But the rules set off a political feud. Initially, the payments regulator set a cap of up to £415,000 per claim — a benchmark that banks and payment companies warned would be ruinous. Under pressure from industry and the government, the regulator eventually lowered that amount to £85,000. The rule came into force in October.
Australia’s government, meanwhile, is taking a different tack, pushing for a new law that would impose fines on social media and telecoms companies, on whose platforms schemes often originate, as well as on banks, for failing to adequately protect consumers.
In the US, the consumer watchdog’s investigation into Zelle was seen as a prelude to potential legislation. But the fate of the probe has been thrown into doubt following Donald Trump’s election victory, with his administration expected to staff the CFPB with a leader who will take a less aggressive stance on big business, or try to scrap it altogether.
US senators Richard Blumenthal and Elizabeth Warren have proposed a bill that would put in place a liability programme similar to the UK’s, but it faces an uphill battle to pass Congress any time soon.
“For years, I’ve sounded the alarm about fraudsters using peer-to-peer payment services like Zelle to steal from hard-working consumers — and I’ve long fought for banks to refund defrauded customers so they aren’t left high and dry,” Warren tells the Financial Times.
Banks are fighting back, claiming that widening their liability risks making banking services more expensive. “There’s no such thing as free money,” explains Alison Jimenez, president of Dynamic Securities Analytics, which consults on financial crime issues. “So the bank refunds an individual, that loss is going to be spread across other customers through higher fees.”
The sector has also warned that scammers may take advantage of the rules to game the system and pose as victims to illegitimately recoup payouts.
“I would say that some scams should not necessarily be reimbursed,” Denise Leonhard, general manager of Zelle, told an industry conference in November. “I think that it’s going to add more criminals in the system.”
The warnings echo those of the UK banking and payment sector before mandatory compensation from banks was implemented. While regulators are monitoring this risk, it is too early to tell whether such gaming has materialised.
Banks are instead calling on phone companies and social media websites to shoulder more of the responsibility. Almost 80 per cent of push payment fraud starts online, of which 60 per cent is estimated to begin on social media, according to trade body UK Finance.
“Banking is amazingly co-operative because they know that they’re going to be held responsible in some way if they don’t,” says West. “Whereas many social media and telco companies have been absolutely not remotely co-operative, not at all helpful. I think that’s because there’s no hammer over their head.”
Banks, politicians and regulators have been increasingly vocal in criticising the tech sector’s fraud prevention efforts. Social media companies are not doing enough to stop scams, they argue. Making them liable would give them an incentive to be better at spotting and taking down fraudulent content.
The UK’s Labour party had drafted plans to force tech companies to share liability for losses to fraud with banks before the July general election. Now, chancellor Rachel Reeves has asked social media and telecoms companies including Meta, TikTok, and BT to update ministers about progress on fraud prevention before March, with the veiled threat of further action if they fail to act.
Nathaniel Gleicher, global head of counter-fraud at Facebook owner Meta, told the FT in October that the platform was already incentivised to fight fraud because it wants to build a “safe” community for its users and risked getting fined by UK media regulator Ofcom.
Under the UK’s Online Safety Act, social media companies are obliged to take down fraudulent ads and risk fines from Ofcom if they fail to do so. Facebook, X and dating app Tinder owner Match Group are also signatories of the online fraud charter, a voluntary agreement drawn up last year between tech companies and the British government to reduce fraud.
Some lawmakers are looking at alternative ways to deter fraudsters. Massachusetts’ secretary of state William Galvin has proposed a bill that would indemnify banks in cases where they decide to delay a payment to allow for further checks. In October, British banks were granted the power to delay payments for up to 72 hours to investigate potential fraud.
However, the legislation has stalled in Massachusetts “because the banking industry, while publicly not commenting, has done everything they can to kill us”, says Galvin.
“This is the fundamental problem here, that the banks are free of responsibility,” he says. “They claim, well they’d be interrupting their customers’ business.”
While banks, politicians and others quibble over liability, the scammers’ methods are only getting more sophisticated.
AI now allows fraudsters to produce more personalised emails, advertisements and messages that are increasingly effective at fooling their targets, says Michael Jabbara, an executive on the payment fraud disruption team at Visa.
“Now there’s this level of personalisation and customisation on the fraud side that legitimate marketers would be pretty envious of,” he says.
Anna Rowe, the founder of Catch the Catfish, an advocacy group for online dating safety, says deepfakes started to crop up in online dating scams in 2022 and have become increasingly sophisticated.
Scammers posing as their victims’ romantic interests were now able to have video calls and superimpose pictures of other people’s faces on their own, she says.
“You can turn your head and it doesn’t distort, they can talk or have learnt not to stretch their mouth too much if they’re doing that, they can now put glasses on,” says Rowe. “It’s evolving really quickly.”
New defences are emerging. Reality Defender is one of a growing number of companies offering banks and others tools to detect deepfakes and prevent frauds.
In his Manhattan office, chief executive Ben Colman revels in demonstrating how easy it is to make audio and video deepfakes. He says his company’s software can rapidly catch AI-generated audio or video that would trick most human eyes and ears.
Reality Defender, whose backers include consulting firms Accenture and Booz Allen Hamilton, is at this point only offering its tools to large operations, banks and others that are trying to detect whether incoming calls are real.
The company is working on a version of its software that could be downloaded through an app store, allowing anyone with a phone to scan incoming messages for deepfakes, but until then Colman says the average consumer remains vulnerable to these and other increasingly sophisticated frauds.
“We call it deepfishing fraud,” says Colman. “AI allows what was once a one-to-one attack from a ‘foreign prince’ to be done on a massive scale.”
As more and more regular people like Christopher Pitet are ensnared in these traps, some say the calls for action in the US will only get louder.
“The numbers are just getting too big to ignore,” says John Breyault from the National Consumers League, a US advocacy group. “No matter whether I talk to the Trumpiest Republican or the [most] bleeding heart Liberal, any time we talk about fraud and scams, everybody’s got a fraud story.”