The most notorious AI scams and how to protect yourself from them

Artificial intelligence is quickly changing how we work, communicate, and solve problems. But like every powerful tool, it’s also being weaponized. Criminals are now using AI technology to scam people with realistic voice cloning, deepfake videos, convincing phishing messages, and fake websites. In this guide, we’ll explain what AI scams are, the most common forms they take, and, most importantly, how you can protect yourself.

Ugnė Zieniūtė

August 27, 2025

What are AI scams?

AI scams are fraud schemes that use artificial intelligence to deceive or manipulate victims into handing over money or sensitive information. They often rely on generative AI tools that can create lifelike voices, faces, or messages, making the scam feel personal and real.

AI scams are part of a broader rise in internet fraud, where automation and deep learning make digital deception harder to detect. Unlike traditional schemes rife with typos and poor grammar, AI-powered scams are sleek, persuasive, and often indistinguishable from legitimate communication. That’s what makes them dangerous.

Types of AI scams and how to recognize them

Even with the latest technology, most AI-powered scams are just modern twists on old tricks. What's new is the realism that generative AI brings to them. Let's take a closer look at some of the most common and damaging types of AI scams happening today.

AI voice cloning scams

Among the most disturbing new phone scams are those that involve AI-generated voice impersonation. Using cloned voices, scammers pose as distressed family members or coworkers in urgent situations to trick victims into handing over money, passwords, or personal details.

How they work

A short audio sample from YouTube, a podcast, a social media post, or a voicemail is enough to set up an AI voice scam. Criminals then use deep learning models to generate realistic-sounding audio.

Scammers will impersonate a family member, boss, or colleague and call you in distress: “I’ve been arrested, I need bail money.” Or: “We need to wire the money today.” The voice sounds exactly like someone you trust — that’s what makes these AI-powered scams so effective.

Example

In 2023, a mother from Arizona received a call she believed was from her 15-year-old daughter. The sobbing, panicked voice claimed she'd been kidnapped, and a demand for $1 million in ransom followed. It all turned out to be a lie generated by a scammer using AI.

Deepfake video scams 

While AI-generated voice scams rely on what you hear, deepfake scams target what you see and trust. These videos look real, sound convincing, and often feature people you know, admire, or report to.

How they work

Deepfake scams use artificial intelligence to manipulate video footage. Fraudsters either superimpose someone’s face onto another person’s body or manipulate their facial expressions to match fake audio.

Example

In 2024, a finance employee at a multinational firm in Hong Kong was tricked by a deepfake video call where the CFO appeared to be live on camera. The deepfake was good enough to convince him to transfer $25 million to the scammers.

Fake AI-generated websites 

Some of the most convincing scams come in the form of fake websites. They look legitimate, offer real-looking products or services, and include glowing reviews or live chat support. However, they’re designed to steal your personal information.

How they work

Scammers now use AI to quickly build websites that mirror real brands. These fake sites may sell counterfeit goods, offer phony giveaways, or prompt you to log in to what looks like your bank, email, or a government portal.

Once you enter your details, scammers steal the data to commit financial fraud or sell it on the dark web. These fake websites may lead to identity theft cases, especially if you unknowingly enter your password, Social Security number, or credit card information.

Example

In 2024, Booking.com reported a 900% surge in travel-related scams, much of it driven by the rise of generative AI. Criminals have been creating fake versions of travel and accommodation websites, complete with AI-generated images, listings, and customer reviews.

AI scams on Facebook and other socials

Social media has become a prime hunting ground for AI-driven scams. With millions of users and a steady flow of personal content, platforms like Facebook, Instagram, and TikTok give scammers exactly what they need: attention, trust, and reach.

How they work

Scammers use AI to generate realistic social media profiles, comments, and videos. These scams vary widely, but some of the most common include:

  • Fake celebrity giveaways (e.g., “Elon Musk is giving away crypto!”).
  • Romance scams using AI-generated faces and chats.
  • Impersonation of small business owners or local officials.

AI makes it easy to produce content that feels personal and legitimate, especially in fast-scrolling environments where users are less likely to pause and verify information.

Example

Meta warned in early 2024 that AI scams on Facebook using fake celebrity endorsements had become widespread. In one scam, fake Mark Cuban videos were used to promote a fraudulent investment platform. Victims were tricked into depositing money and lost over $8 million collectively.

AI-powered investment scams and crypto scams 

Investment scams have always preyed on people’s desire for financial security or fast profits. With the rise of AI, scammers have new tools to make these schemes look even more credible.

How they work

These scams often center around AI trading bots or automated investment platforms that claim to use advanced algorithms to beat the market. Scammers promote these tools through deepfake videos of well-known investors or influencers, often claiming endorsements that were never given.

Once users are hooked, the scam sites display fake dashboards that show growing profits in real time to encourage more deposits.

Example

A scam platform called Quantum AI falsely claimed Elon Musk endorsed it. Victims deposited thousands, thinking they were investing in a next-gen trading system. In the UK alone, this scam cost people £2 million.

AI-generated phishing emails and texts

Gone are the days when phishing scams were riddled with poor grammar and obvious red flags. AI is now helping scammers craft much more convincing phishing emails.

How they work

Phishing emails are designed to trick you into downloading malware or sharing your personal information, such as your Social Security number, financial information, or usernames and passwords. Generative AI tools can now create flawless phishing emails and texts in any language, impersonating companies, banks, a family member, or even your employer.

Some are tailored using personal data scraped from public profiles. Others use tone-matching tools to mimic your writing style. They represent a new frontier in social engineering and one of the most dangerous types of phishing now circulating.

Example

A US-based CEO received an email from what looked like their head of finance asking for an urgent wire transfer. The message looked authentic — it used correct formatting, familiar phrasing, and referenced current company projects. It was written by AI and resulted in a $137,000 loss.

How to spot AI scams early

While scams powered by artificial intelligence are smart, they’re not invisible. You can learn to recognize the common warning signs of AI scams before it’s too late:

  • Urgency. Scammers want you to act before you think. If someone’s pushing you to move fast, slow down.
  • Uncommon payment methods. Requests for payment in cryptocurrency, gift cards, wire transfers, or prepaid debit cards are almost always a sign of fraud.
  • Unusual requests for personal information. Be cautious if someone contacts you unexpectedly and asks for details like your Social Security number or bank account information.
  • Inconsistent details. AI can clone a voice, but it can’t always get context right. Ask unexpected questions.
  • Unusual language. Even well-written AI text may feel “off.” Trust your gut.
  • You’re discouraged from double-checking. If someone insists you don’t contact others or avoid “going through official channels,” that’s a sign they don’t want their story scrutinized.

If something seems suspicious, take these steps:

  • Pause before reacting to any emotionally charged message or call.
  • Verify the identity of the person contacting you. Don’t call back the same number — use a number you trust or try reaching them through another family member or coworker.
  • Search for parts of the message online. Many AI scams follow specific templates.
  • Watch for small irregularities in speech, facial movement, or video quality, especially in phone calls or recorded messages. These subtle signs can indicate the use of deepfake technology.
  • Use caller ID and spam filters, but don’t rely on them fully.
  • Educate older relatives and kids on the latest AI scams and security measures.

What to do if you have fallen for an AI scam

If you’ve been scammed, don’t panic. Acting fast can limit the damage, improve your chances of recovering funds, and protect others.

Follow this action plan if you have fallen for an AI scam:

  • Contact your bank or credit card provider immediately. Ask them to freeze your account and stop or reverse any suspicious transactions.
  • Report identity theft to your local fraud authority (e.g., the Federal Trade Commission, Action Fraud UK, etc.). Reporting scams helps officials track new scam tactics, warn others, and pursue investigations.
  • Protect your credit. Add a fraud alert to your credit reports. This feature tells lenders to take extra steps to verify your identity before opening new accounts. In the US, placing an alert with one bureau (Equifax, Experian, or TransUnion) automatically notifies the others.
  • Secure your accounts. Change your passwords, especially for banking, email, and cloud storage. Monitor your financial accounts carefully for any unusual activity. 
  • Enable 2FA (two-factor authentication) across all important platforms.
  • Document everything, including screenshots, phone numbers, and emails. You’ll need this information for reports or recovery.
  • Warn others. If the scam involved impersonation, notify your contacts. A quick heads-up can prevent others from being tricked the same way.

How to avoid AI scams and protect yourself from them

Scams powered by AI technology aren’t going away, but with the right habits and precautions, you can make yourself a much harder target. The key is to stay alert, think critically, and follow a few simple precautions:

  • Use a callback code. Agree with family members on a phrase only you know. If you get a suspicious call, ask for the code.
  • Keep sensitive information private. Never share account numbers, passwords, one-time codes, or ID documents through email, text, or messaging apps. Financial institutions will never ask for these details this way.
  • Be careful with your voice online. Posting voice notes or videos publicly can give scammers the material they need for voice cloning. Use privacy settings and think twice before sharing content with identifying information.
  • Verify identity visually. When possible, ask for a live video or in-person confirmation.
  • Don’t overshare online. The more data is available, the easier it is for scammers to mimic you.
  • Keep your systems updated. Many scams rely on exploiting old software vulnerabilities.

FAQ

What precaution is suggested to protect against AI-generated phone scams?

To avoid AI phone scams, set up a family “safe word” or callback question that only your real loved ones would know. Always verify requests (especially for money or sensitive data) through another channel.

How do I detect AI scams?

To detect AI-powered scams, look for urgency, payment via untraceable methods, poor contextual awareness, or oddly perfect language. Verify every suspicious message through known channels before acting.

Are deepfakes hard to detect?

Yes, especially for non-experts. While some deepfakes still have tell-tale glitches (unnatural blinking or lip-sync issues), many are now nearly flawless. That’s why verification is critical.

How many people have been scammed using AI?

Exact numbers are hard to track, but global losses from AI-related scams are estimated to be in the billions of dollars annually. The FBI, Europol, and private security firms all report a sharp rise in voice and deepfake scams powered by artificial intelligence.

Ugnė Zieniūtė

Ugnė is a content manager focused on cybersecurity topics such as identity theft, online privacy, and fraud prevention. She works to make digital safety easy to understand and act on.