Cybercrime: AI-driven corporate fraud
How to protect yourself against the cutting edge of cybercrime
While artificial intelligence (AI) apps have increased some efficiencies, they’ve also proven to be a double-edged sword. Following the widespread adoption of popular generative artificial intelligence (GAI) tools like ChatGPT and Midjourney, U.S. banks recorded an unprecedented spike in losses, from $767 million in 2022 to $1.3 billion by the close of 2023, a large portion of which has been attributed to AI-driven fraud.1
AI-driven business fraud occurs when scammers feed legitimate communications from corporate targets into a generative AI app, sharpening their ability to commit crimes. These apps can then produce either cutting-edge malware that batters your cyber defenses or detailed, customized impersonations of your corporate communications that raise the scammers’ odds of pulling off social engineering fraud.2
AI supercharges the stealth, speed, and accuracy of hacking attempts and familiar types of cyber fraud, such as phishing, vishing, and smishing. This boosted efficiency emboldens scammers to increase the range and frequency of their cyberattacks—making any employee within your company the potential target of an onslaught of sophisticated schemes and hacks.
Case study: McAfee unveils AI deepfake detection technology.
At the 2024 Consumer Electronics Show (CES), cybersecurity provider McAfee unveiled what could be a quantum leap forward in combating generative AI-driven voice phishing. Project Mockingbird uses AI-powered behavioral, contextual, and categorical detection models to spot AI-generated voice clones. While still in beta, it impressed show attendees with a 90% detection accuracy rate.4
Best practices and prevention
When it comes to combating supercharged cybercrimes, your best defense is to mirror the know-how of scammers while adopting the very tech they’re using against you. Here are some practical steps to help you accomplish both.
Refresh your memory on red flags.
AI-driven business fraud can make familiar threat-detection cues (like grammatical errors in phishing emails) less reliable. But unsolicited emails or texts requesting sensitive information are still red flags. Similarly, even with the added power of generative AI tools, brute force attacks can only get so far when strong password creation protocols and regular password hygiene are in place.
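To see why length is your best friend against brute force attacks, consider this minimal sketch of a password policy check. The function names and thresholds are hypothetical illustrations, not part of any Truist system; the sketch simply estimates how large the guessing space is for a given password.

```python
import math
import string

def estimate_entropy_bits(password: str) -> float:
    """Rough brute-force estimate: log2(character_pool ** length)."""
    pool = 0
    if any(c.islower() for c in password):
        pool += 26
    if any(c.isupper() for c in password):
        pool += 26
    if any(c.isdigit() for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

def meets_policy(password: str, min_length: int = 14, min_bits: float = 75.0) -> bool:
    """Hypothetical policy: require both a minimum length and enough entropy."""
    return len(password) >= min_length and estimate_entropy_bits(password) >= min_bits

print(meets_policy("Tr0ub4dor&3"))                   # False: too short, despite symbols
print(meets_policy("correct-horse-battery-staple"))  # True: length dominates
```

The takeaway matches the guidance above: each extra character multiplies the work a brute force attack must do, so long passphrases outlast short, complex strings.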
Make double-checking identities a top priority.
The simple step of double-checking an identity is enough to foil most AI-driven business fraud tactics, like deepfakes and voice cloning. If a person or institution asks for sensitive information, confirm the requester’s identity first. This tactic is further strengthened when paired with a multiperson approval process for financial transactions, such as wire or ACH transfers.
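As an illustration of how a multiperson approval process can be enforced in software, here is a minimal sketch. The class and field names are hypothetical; the two key rules are that a requester can never approve their own transfer and that funds move only after two distinct colleagues sign off.

```python
from dataclasses import dataclass, field

@dataclass
class WireTransfer:
    """Hypothetical transfer record requiring multiperson approval."""
    amount: float
    destination: str
    requested_by: str
    approvals: set[str] = field(default_factory=set)

    def approve(self, employee: str) -> None:
        # Rule 1: the requester can never approve their own transfer.
        if employee == self.requested_by:
            raise PermissionError("Requester cannot approve their own transfer.")
        self.approvals.add(employee)

    def is_releasable(self, required_approvers: int = 2) -> bool:
        # Rule 2: release only after enough distinct approvers sign off.
        return len(self.approvals) >= required_approvers

transfer = WireTransfer(amount=250_000, destination="VENDOR-001", requested_by="alice")
transfer.approve("bob")
print(transfer.is_releasable())  # False: one approval isn't enough
transfer.approve("carol")
print(transfer.is_releasable())  # True: two distinct approvers
```

Even if a deepfake convinces one employee, the transaction stalls until a second person independently verifies it.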
Upgrade your tech.
Whether you’re supplying your security team with these tools or hiring the right professionals to install and manage them, tools powered by generative AI and machine learning (ML) are crucial to securing yourself against this cutting-edge threat. They can strengthen security analysis, identify threats proactively, bolster threat management, and flag even the stealthiest malware attempts.
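To make “identify threats proactively” concrete, here is a minimal sketch of ML-based anomaly flagging, assuming the scikit-learn library is installed. The login features and values are invented for illustration; commercial security tools are far more sophisticated, but the principle of learning normal behavior and flagging outliers is the same.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event: [hour_of_day, failed_attempts, mb_transferred]
normal_activity = np.array([
    [9, 0, 12], [10, 1, 8], [11, 0, 15], [14, 0, 9],
    [15, 1, 11], [16, 0, 14], [9, 0, 10], [13, 0, 13],
])

# Train an anomaly detector on routine business-hours activity.
model = IsolationForest(contamination=0.1, random_state=42).fit(normal_activity)

# A 3 a.m. login with many failed attempts and a huge transfer stands out.
suspicious = np.array([[3, 7, 900]])
print(model.predict(suspicious))     # [-1] flags an anomaly for review
print(model.predict([[10, 0, 11]]))  # [1] looks like routine activity
```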
Talk to Truist.
Truist will never call or email you to ask for your account information. If a caller asking for this info claims to be from Truist, hang up and call 888-228-6654. In conjunction with our fraud prevention specialists, your relationship manager can put you in touch with professionals who will help spot and halt deceptions before they threaten your business.
FAQ on AI-driven business fraud