Fraud has always been a numbers game for criminals. What AI changes is the scale at which they can play it. Attacks that once required technical expertise can now be run by anyone with access to a generative AI tool, and the results are landing in California inboxes, bank accounts, and hiring pipelines with increasing frequency.
How AI Has Changed the Attack Playbook
The most significant shift is not the appearance of new attack types. Phishing, impersonation, and website fraud are not new. What AI has done is remove the barriers that previously kept those attacks crude and limited.
Experian’s chief innovation officer for fraud and identity described the shift directly: AI has made it possible for criminals with less expertise to create more convincing scams and deploy them at far greater volume. A fraudster who previously needed a team to run a phishing campaign can now automate the same operation with a fraction of the effort.
Experian’s 2026 Future of Fraud Forecast found that nearly 60% of companies reported an increase in fraud losses between 2024 and 2025. A separate survey, Experian’s 2025 U.S. Identity and Fraud Report, found that 72% of business leaders expect AI-enabled fraud and deepfakes to be among their top operational challenges this year. Together, the two reports paint a consistent picture: fraud is getting more costly, and the businesses dealing with it know what is driving the increase.
Three AI Fraud Threats Businesses Need to Know About
The 2026 forecast identifies five major fraud trends, three of which are particularly relevant for businesses with employees, vendors, and customer-facing digital systems.
Deepfake Impersonation
Generative AI tools can produce convincing audio and video of real people, including executives, vendors, and job candidates. Experian forecasts that employment fraud will escalate in the remote workforce, with AI-generated candidates passing interviews in real time and gaining access to sensitive internal systems once they are unwittingly hired. Beyond hiring, the same deepfake technology is being used to impersonate executives on video calls and provide fraudulent authorization for wire transfers.
Cloned Websites
AI has made it significantly cheaper and faster to replicate a legitimate business’s website down to its branding, layout, and content. Even after takedown requests, spoofed domains continue to resurface, forcing businesses to manage the threat continuously rather than resolving it once. These sites are used to harvest login credentials and payment details from customers who have no reason to suspect the page is fake.
AI-Generated Phishing
The phishing emails arriving in staff inboxes today bear little resemblance to the obviously suspicious messages of a few years ago. AI-generated messages arrive with correct grammar, plausible sender addresses, and personalized references drawn from publicly available information. Employees trained to spot spelling errors and strange formatting are now facing messages that show none of those warning signs.
Why California Businesses Face Heightened Exposure
Digital dependency and scale make California a prime target. The FBI’s 2024 Internet Crime Report ranked California first in the nation for internet crime complaints, with residents and businesses reporting more than $2.5 billion in losses. The top three cybercrimes by complaint volume were cryptocurrency fraud, extortion, and phishing/spoofing. The pattern holds closer to home too. As we covered in our post on the San Joaquin Superior Court data breach, no region is exempt and no organization is too small to be a target.
Nationally, cybercrime losses reached $16.6 billion in 2024, a 33% increase from the prior year. The number of complaints did not grow at the same rate, which means individual incidents are becoming more costly. When attacks are more believable and more precisely targeted, the average loss per incident rises.
Small and mid-sized businesses are not insulated from this. Automated AI fraud tools are designed to probe widely, and a company with accessible systems and limited security controls is a straightforward target regardless of its size.
The Real Cost When an Attack Gets Through
For California businesses, the financial exposure from an AI fraud incident extends well beyond any direct theft.
A phishing attack that results in a data breach triggers notification requirements under California law. Depending on the type of data exposed, that means legal fees, regulatory review, and customer notification costs that stack up regardless of the original loss amount. A deepfake scam that convinces a finance employee to authorize a wire transfer can leave funds effectively unrecoverable once the transfer is processed. We break down these costs in more detail in our guide to data breach prevention for Bakersfield businesses.
Reputational damage is harder to quantify and slower to resolve. Customers and partners who lose confidence in a business following a security incident are difficult to retain, and rebuilding that trust takes time most small businesses do not have in abundance.
Strengthening Cybersecurity Against AI Threats
Effective cybersecurity defenses against AI-powered fraud do not require a large in-house security team, but they do require deliberate choices made before an incident occurs, not after.
Multi-factor authentication (MFA): Adding a second verification step at login creates friction that automated attacks routinely fail to clear (see the sketch after this list for what that check looks like in practice). It remains one of the highest-value controls a business can put in place relative to cost and implementation time.
Updated security awareness training: Phishing training built around spotting formatting errors and suspicious links is no longer sufficient. Effective training now focuses on verification habits and behavioral signals, teaching staff to confirm requests through known channels rather than taking the message at face value.
Proactive monitoring: Catching unusual account or network activity before a breach is confirmed is significantly more cost-effective than responding after the fact. As we outlined in our post on what Bakersfield businesses gain from proactive IT support, having eyes on your systems continuously is what separates businesses that contain incidents quickly from those that don’t find out until the damage is done.
Verification protocols for financial requests: Given the rise of deepfake impersonation, any financial request received by email or phone should require a callback to a contact number that has been independently verified, not one included in the message itself.
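To make the MFA recommendation above concrete, here is a minimal sketch of time-based one-time password (TOTP) verification, assuming a Python backend and the open-source pyotp library. The function names and the ExampleCo issuer are hypothetical, and most businesses would get the same protection through their existing identity provider rather than custom code.

```python
# Minimal TOTP-based MFA sketch (assumes: pip install pyotp).
# Names and issuer are illustrative, not from any specific product.
import pyotp


def enroll_user() -> str:
    """Generate a per-user secret; store it server-side and share it
    with the user's authenticator app, usually via a QR code."""
    return pyotp.random_base32()


def build_enrollment_uri(secret: str, email: str) -> str:
    """Produce the otpauth:// URI an authenticator app scans during setup."""
    return pyotp.TOTP(secret).provisioning_uri(name=email, issuer_name="ExampleCo")


def verify_login(secret: str, submitted_code: str) -> bool:
    """Second-factor check: accept the login only if the 6-digit code the
    user typed matches the current time-based code for their secret.
    valid_window=1 tolerates one 30-second step of clock drift."""
    return pyotp.TOTP(secret).verify(submitted_code, valid_window=1)


if __name__ == "__main__":
    secret = enroll_user()
    print("Enrollment URI:", build_enrollment_uri(secret, "employee@example.com"))
    # Simulate the user reading the current code from their authenticator app.
    current_code = pyotp.TOTP(secret).now()
    print("Login accepted?", verify_login(secret, current_code))
```

The specific library matters less than the pattern: a stolen password alone no longer completes a login, because the attacker cannot produce the rotating code tied to the employee's device.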
Grapevine MSP works with businesses across Bakersfield and the wider San Joaquin Valley to put these protections in place. Reach out to our team to understand where your current security posture stands and what steps make the most sense for your business.
FAQ
Are small businesses really targeted by AI fraud, or is it mainly larger companies?
Small and mid-sized businesses are targeted at a disproportionately high rate. They typically hold valuable data and funds but have fewer security controls than larger organizations, making them straightforward targets for automated attacks that probe thousands of businesses simultaneously.
What is a deepfake, and how is it being used in business fraud?
A deepfake is AI-generated audio or video that realistically imitates a real person. In a business context, fraudsters use deepfake technology to impersonate executives on video calls or voice messages, often to authorize fraudulent wire transfers or extract sensitive information from employees.

