Artificial intelligence is quickly changing how we work, communicate, and solve problems. But like every powerful tool, it’s also being weaponized. Criminals are now using AI technology to scam people with realistic voice cloning, deepfake videos, convincing phishing messages, and fake websites. In this guide, we’ll explain what AI scams are, the most common forms they take, and, most importantly, how you can protect yourself.
Ugnė Zieniūtė
August 27, 2025
AI scams are fraud schemes that use artificial intelligence to deceive or manipulate victims into handing over money or sensitive information. They often rely on generative AI tools that can create lifelike voices, faces, or messages, making the scam feel personal and real.
AI scams are part of a broader rise in internet fraud, where automation and deep learning make digital deception harder to detect. Unlike traditional schemes rife with typos and poor grammar, AI-powered scams are sleek, persuasive, and often indistinguishable from legitimate communication. That’s what makes them dangerous.
Even with the latest technology, most AI-powered scams are just modern twists on old tricks. What's new is the realism that generative AI brings to them. Let's take a closer look at some of the most common and damaging types of AI scams happening today.
Among the most disturbing new phone scams are those that involve AI-generated voice impersonation. Using cloned voices, scammers pose as distressed family members or coworkers in urgent situations to trick victims into handing over money, passwords, or personal details.
A short audio sample from a YouTube video, podcast, social media post, or voicemail is enough to set up an AI voice scam. Criminals feed the sample into deep learning models that generate realistic-sounding speech in the same voice.
Scammers will impersonate a family member, boss, or colleague and call you in distress: “I’ve been arrested, I need bail money.” Or: “We need to wire the money today.” The voice sounds exactly like someone you trust — that’s what makes these AI-powered scams so effective.
In 2023, a mother from Arizona received a call she believed was from her 15-year-old daughter. The sobbing, panicking voice claimed she’d been kidnapped, and a demand for $1 million in ransom money followed. It all turned out to be a lie generated by a scammer using AI.
While AI-generated voice scams rely on what you hear, deepfake scams target what you see and trust. These videos look real, sound convincing, and often feature people you know, admire, or report to.
Deepfake scams use artificial intelligence to manipulate video footage. Fraudsters either superimpose someone’s face onto another person’s body or manipulate their facial expressions to match fake audio.
In 2024, a finance employee at a multinational firm in Hong Kong was tricked by a deepfake video call where the CFO appeared to be live on camera. The deepfake was good enough to convince him to transfer $25 million to the scammers.
Some of the most convincing scams come in the form of fake websites. They look legitimate, offer real-looking products or services, and include glowing reviews or live chat support. However, they’re designed to steal your personal information.
Scammers now use AI to quickly build websites that mirror real brands. These fake sites may sell counterfeit goods, offer phony giveaways, or prompt you to log in to what looks like your bank, email, or a government portal.
Once you enter your details, scammers steal the data to commit financial fraud or sell it on the dark web. These fake websites can lead to identity theft, especially if you unknowingly enter your password, Social Security number, or credit card information.
In 2024, Booking.com reported a 900% surge in travel-related scams, much of it driven by the rise of generative AI. Criminals have been creating fake versions of travel and accommodation websites, complete with AI-generated images, listings, and customer reviews.
Social media has become a prime hunting ground for AI-driven scams. With millions of users and a steady flow of personal content, platforms like Facebook, Instagram, and TikTok give scammers exactly what they need: attention, trust, and reach.
Scammers use AI to generate realistic social media profiles, comments, and videos, and the resulting scams vary widely in form and target.
AI makes it easy to produce content that feels personal and legitimate, especially in fast-scrolling environments where users are less likely to pause and verify information.
Meta warned in early 2024 that AI scams on Facebook using fake celebrity endorsements had become widespread. In one scam, fake Mark Cuban videos were used to promote a fraudulent investment platform. Victims were tricked into depositing money and lost over $8 million collectively.
Investment scams have always preyed on people’s desire for financial security or fast profits. With the rise of AI, scammers have new tools to make these schemes look even more credible.
These scams often center around AI trading bots or automated investment platforms that claim to use advanced algorithms to beat the market. Scammers promote these tools through deepfake videos of well-known investors or influencers, often claiming endorsements that were never given.
Once users are hooked, the scam sites display fake dashboards that show growing profits in real time to encourage more deposits.
A scam platform called Quantum AI falsely claimed Elon Musk endorsed it. Victims deposited thousands, thinking they were investing in a next-gen trading system. In the UK alone, this scam cost people £2 million.
Gone are the days when phishing scams were riddled with poor grammar and obvious red flags. AI now helps scammers craft far more convincing phishing emails.
Phishing emails are designed to trick you into downloading malware or sharing your personal information, such as your Social Security number, financial details, or usernames and passwords. Generative AI tools can now create flawless phishing emails and texts in any language, impersonating companies, banks, family members, or even your employer.
Some are tailored using personal data scraped from public profiles. Others use tone-matching tools to mimic your writing style. They represent a new frontier in social engineering and one of the most dangerous types of phishing now circulating.
A US-based CEO received an email from what looked like their head of finance asking for an urgent wire transfer. The message looked authentic — it used correct formatting, familiar phrasing, and referenced current company projects. It was written by AI and resulted in a $137,000 loss.
While scams powered by artificial intelligence are sophisticated, they're not invisible. You can learn to recognize the common warning signs of AI scams before it's too late.
If something seems suspicious, slow down, stop communicating with the sender, and verify the request through a channel you already trust before acting.
If you've been scammed, don't panic. Acting fast can limit the damage, improve your chances of recovering funds, and protect others from falling for the same scheme.
Scams powered by AI technology aren't going away, but with the right habits and precautions, you can make yourself a much harder target. The key is to stay alert, think critically, and verify before you trust.
Ugnė is a content manager focused on cybersecurity topics such as identity theft, online privacy, and fraud prevention. She works to make digital safety easy to understand and act on.