A Generational Threat To Accounts Payable: AI-Driven Social Engineering

Mar 14, 2025

Social Engineering in the Age of AI

Social engineering is by no means new. It's as old as dial-up: a constant, creeping threat lurking in your inbox, often masquerading as an urgent message from the CEO or a request from the IT department.

Most employees in a B2B environment have spam folders filled with poorly written, caps-locked messages. It's routine, almost laughable, relegated to the dustbin of tired old scams.

However, thanks to generative artificial intelligence (GAI) and large language models (LLMs), social engineering has gotten a facelift. Phishing attempts are no longer designed by careless humans; they're generated by algorithms that continuously learn from and adapt to their victims, deceiving with uncanny precision.

How It Works

AI-led social engineering takes deception to a completely new level. It can study its target, whether an individual employee, a business, or an organisation, and craft personalised attacks. Through machine learning, AI systems learn from past interactions, continuously refining their tactics to become more convincing.

By analysing datasets such as emails, social media profiles, and company financials, AI can generate realistic and targeted communications that are remarkably hard to differentiate from genuine messages.

The Risk to Accounts Payable

As with traditional scams, the accounts payable process is particularly vulnerable to these attacks. AP teams are responsible for handling large financial transactions and interact frequently with external vendors, while manual payment and reconciliation processes leave room for human error, making AP a prime target.

Let's examine some of the most common deceptions to watch out for:

AI Vendor Fraud

Imagine a scammer using AI to monitor a company’s internal communications, gathering context about vendors, employees, and financial processes. The result? An email that appears to be from a trusted supplier, complete with familiar logos, signatures, and even tailored payment instructions. If the victim doesn’t spot the small details or fails to ask the right questions, the scammer might walk away with hundreds of thousands.

Audio and Video Deepfakes

AI can also enhance attacks with deepfake technology—think video or audio impersonations that mimic the voice or face of key personnel in the organisation. Gone are the days of clumsy fake calls; now, it's your boss asking for a transfer, and the technology behind it is so convincing that even the sharpest eye might fall prey. For example, a finance worker in Hong Kong was tricked into transferring $25 million after scammers used a deepfake of the company's chief financial officer on a video conference call.

Automated Spearphishing (Targeted Attacks)

Unlike traditional phishing, which targets large groups, AI spearphishing focuses on specific individuals within an organisation. By analysing publicly available data, attackers can craft messages that feel genuine and tailored to the victim. 

This increases the likelihood of the victim falling for the scam, which aims to steal sensitive information like login details or financial data. A recent study by the Harvard Kennedy School found that over 50% of those targeted by spearphishing couldn't tell AI-generated messages from ones written by real people.

How to Tackle AI-Led Social Engineering

The next generation of business theft won't require breaking into a single system. It will be carried out through digital deception, carefully crafted to fool employees. But AI isn't perfect, and we can take steps to prevent disaster.

Employee Training

Employees are the first and last line of defence against AI-driven scams. Educate them on the common signs of a scam, such as unexpected payment requests and suspicious email addresses. If AI is continuously learning, so must you: staying up to date on the threats AI poses is key.

Multi-Factor Authentication (MFA)

A standard protection practice, but one that is often overlooked or deemed unnecessary. MFA adds a layer of protection against unauthorised access: even if a fraudster can mimic an email or phone call, MFA ensures they cannot reach sensitive information without the additional authentication factor.
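To make this concrete, here's a minimal sketch of how one common MFA factor, the time-based one-time password (TOTP), can be verified in Python using the open-source pyotp library. The enrolment and verification flow is simplified for illustration and isn't tied to any particular product:

```python
# A minimal TOTP sketch using the open-source pyotp library.
# The enrolment/verification flow is simplified for illustration.
import pyotp

# Enrolment: generate a shared secret once and store it server-side;
# the user loads it into an authenticator app (e.g. via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Verification: the user submits the 6-digit code from their app.
# verify() checks it against the current 30-second time window.
submitted_code = totp.now()  # stand-in for user input in this sketch
if totp.verify(submitted_code):
    print("MFA check passed - proceed with the sensitive action.")
else:
    print("MFA check failed - block the action and alert security.")
```

Even a simple second factor like this means a convincing email or phone impersonation alone is no longer enough to authorise a payment.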

Fight Fire with Fire

The same technology powering these scams can also be used to defend against them. Companies can use AI-powered systems to analyse email patterns, identify unusual behaviour, and detect deepfakes before they do any real damage.
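As a toy illustration of what such a system might look for, the sketch below trains a simple anomaly detector over made-up email metadata using scikit-learn's IsolationForest. The features, training data, and example email here are invented for illustration; real deployments use far richer signals:

```python
# A toy anomaly-detection sketch over email metadata using scikit-learn.
# Features and data are invented for illustration, not production use.
from sklearn.ensemble import IsolationForest

# Example features per email: [hour sent, links in body,
# is new sender (0/1), count of payment-related keywords].
# Historical "normal" traffic for this inbox:
normal_emails = [
    [9, 1, 0, 0], [10, 0, 0, 1], [14, 2, 0, 0],
    [11, 1, 0, 0], [16, 0, 0, 1], [13, 1, 0, 0],
]

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(normal_emails)

# A 2 a.m. email from a new sender, stuffed with links and payment talk:
suspicious = [[2, 6, 1, 4]]
if model.predict(suspicious)[0] == -1:  # -1 means "outlier"
    print("Flagged for review before anyone acts on it.")
```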

Double-Check Through Different Channels

Unexpected payment request? Pick up the phone, message the person directly on a different platform, or even walk over to their desk. Cross-channel verification makes it much harder for scammers to impersonate personnel.

Monitor Financial Transactions

Set up safeguards that track unusual or high-risk financial transfers. AI can help analyse patterns and flag irregularities before it’s too late.
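A minimal, rule-based sketch of what such a safeguard could look like is below. The thresholds, field names, and example payment are hypothetical, and a real system would combine rules like these with the pattern analysis described above:

```python
# A minimal rule-based payment-monitoring sketch; thresholds, field
# names, and the example payment are hypothetical illustrations.
HIGH_RISK_THRESHOLD = 50_000  # flag anything above this for manual review

def flag_payment(payment: dict, known_vendor_accounts: dict) -> list:
    """Return a list of reasons a payment needs extra verification."""
    reasons = []
    if payment["amount"] > HIGH_RISK_THRESHOLD:
        reasons.append("amount exceeds high-risk threshold")
    # New or changed bank details are the classic vendor-fraud signal.
    expected = known_vendor_accounts.get(payment["vendor_id"])
    if expected is None:
        reasons.append("unknown vendor")
    elif expected != payment["iban"]:
        reasons.append("bank details differ from vendor master record")
    if payment.get("urgent_request"):
        reasons.append("marked urgent - common social-engineering pressure")
    return reasons

reasons = flag_payment(
    {"vendor_id": "V-101", "amount": 82_000,
     "iban": "GB00NEW0", "urgent_request": True},
    {"V-101": "GB00OLD0"},
)
if reasons:
    print("Hold payment:", "; ".join(reasons))
```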

About Pax2Pay

Although the best defence is educating your employees, Pax2Pay virtual cards can provide an extra layer of security, from single-use prepaid virtual cards to cards locked to specific suppliers. Get in touch to see how we can help safeguard your accounts payable.