E-commerce fraud is expected to increase in the next five years thanks to AI, and merchants are advised to respond with AI.
Juniper Research, a consultancy based in Hampshire, UK, released a report on Monday predicting that the value of e-commerce fraud will rise from $44.3 billion in 2024 to $107 billion in 2029, a 141 percent increase.
The company says AI tools are allowing fraudsters to outsmart security measures, enabling attacks of greater sophistication, scale, and frequency. Such tools can automate the creation of fake accounts and synthetic identities used to defraud merchants, and the resulting attacks, it is claimed, can overwhelm rule-based prevention systems.
“E-commerce merchants should look to integrate fraud prevention systems that offer artificial intelligence capabilities to quickly identify emerging tactics,” Thomas Wilson, the report’s author, said in a statement. “This is especially true in developed markets, where larger merchants are more at risk of being targeted for fraud, such as the testing of stolen credit cards.”
The potential for artificial intelligence to help construct credible frauds has become a matter of public concern. In May, California Attorney General Rob Bonta warned Californians about AI-based scams that rely on “deepfakes” to impersonate family members and government officials. And the FTC last month announced Operation AI Comply, five legal actions against companies that make exaggerated AI claims or sell AI technology that can be used to deceive.
Academics studying AI safety have also sounded the alarm about AI’s deceptive potential. Last year, in a preprint paper, researchers from MIT, the Australian Catholic University and the Center for Artificial Intelligence Safety said: “Various artificial intelligence systems have learned to deceive humans. This capability poses a risk. But this risk can be mitigated by applying strict regulatory standards to AI systems that are capable of deception and by developing technical tools to prevent AI deception.”
Political leaders, however, have rejected stricter regulatory standards due to concerns about economic harm. Last month in California, for example, Governor Gavin Newsom vetoed SB 1047, seen as one of the most sweeping attempts at AI legislation to date. Artificial intelligence companies had lobbied against the bill.
However, other proposed legislation aimed at addressing AI-based fraud, such as the No AI Fraud Act, still awaits action by US lawmakers. The European Union’s AI Act, a comprehensive legal framework for AI transparency and accountability, came into force in August, and most of its provisions will apply from August 2026.
Juniper’s contribution to these concerns is to urge merchants to fight fire with fire: AI-based fraud detection, it argues, can be useful against first-party fraud — when customers knowingly defraud merchants for personal gain — and other forms of fraud. “For example, AI can identify unusual spending patterns, unexpected changes in customer behavior, or multiple accounts linked to a device,” the company explains in a white paper.
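Juniper’s white paper doesn’t name a specific technique, but signals like those are typically surfaced with unsupervised anomaly detection. The following is a minimal sketch of the idea, assuming scikit-learn and entirely invented transaction features (order value, recent order count, accounts per device); it is an illustration, not Juniper’s method:

# Minimal anomaly-detection sketch; features and data are synthetic, not from the report.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" transactions: modest order values, few orders per hour,
# one or two accounts linked to each device.
normal = np.column_stack([
    rng.gamma(2.0, 30.0, 5000),   # order value in dollars
    rng.poisson(1, 5000),         # orders from this customer in the last hour
    rng.poisson(1, 5000) + 1,     # accounts linked to the device
])

# A few suspicious ones: large values, bursts of orders, many linked accounts.
suspicious = np.array([
    [900.0, 12, 8],
    [450.0,  9, 6],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for outliers (flag for review) and 1 for inliers.
print(model.predict(suspicious))   # expected: [-1 -1]
print(model.predict(normal[:3]))   # mostly: [1 1 1]

In practice a merchant would feed in real behavioral features and route flagged orders to manual review rather than blocking them outright.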
However, there are drawbacks: AI-based detection requires a lot of data, and the infrastructure and talent needed to run these systems are expensive. It may also produce false positives. As Juniper Research explains: “Genuine customers using unfamiliar browsers and VPNs (virtual private networks) are likely to be flagged as fraudulent users; this leads to a decrease in customer satisfaction and loss of revenue for the merchant.”
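The trade-off Juniper describes ultimately comes down to where the decision threshold sits: flag more aggressively and you catch more fraud but block more genuine customers. A hypothetical illustration with synthetic risk scores (nothing below comes from the report):

# Illustrative only: moving a fraud-score threshold trades fraud caught
# against genuine customers wrongly blocked. Scores and labels are synthetic.
import numpy as np

rng = np.random.default_rng(1)

# 0 = genuine customer, 1 = fraud. Fraud scores higher on average, but genuine
# customers on VPNs or unfamiliar browsers overlap with the fraudulent range.
labels = np.concatenate([np.zeros(9800), np.ones(200)])
scores = np.concatenate([
    rng.normal(0.25, 0.15, 9800),   # genuine traffic
    rng.normal(0.70, 0.15, 200),    # fraudulent traffic
])

for threshold in (0.4, 0.6, 0.8):
    flagged = scores >= threshold
    caught = int(np.sum(flagged & (labels == 1)))
    blocked = int(np.sum(flagged & (labels == 0)))
    print(f"threshold={threshold:.1f}: caught {caught}/200 fraudulent orders, "
          f"wrongly blocked {blocked} genuine customers")

Raising the threshold spares genuine customers but lets more fraud through, which is why Juniper frames false positives as a revenue problem rather than a purely technical one.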
Furthermore, the artificial intelligence involved – machine learning – often behaves in ways that are not easily explained, which makes it difficult to improve fraud prediction algorithms based on observed errors.
However, Juniper’s answer to AI is more AI, which doesn’t seem likely to end well. ®