
Social Media Scams Cost $2.1B in 2025

Americans lost at least $2.1 billion to social media scams in 2025, an eightfold jump since 2020, with investment scams leading the surge. FTC data reveals growing platform risks.

Americans lost at least $2.1 billion to scams that originated on social media in 2025, according to the Federal Trade Commission. That’s an eightfold increase from 2020—and it’s not just a number. It’s a signal that the social web has become a hostile environment for everyday users, one where trust is weaponized at scale.

Key Takeaways

  • The FTC reports $2.1 billion in consumer losses from social media scams in 2025—an eightfold rise since 2020.
  • Investment scams were the most costly, with $1.1 billion lost, often starting with fake educational ads.
  • Over 40 percent of fraud reports involved impersonation of real people, including family members and public figures.
  • Meta platforms accounted for the majority of reported scam origins, though the FTC did not name services explicitly.
  • Victims aged 30–39 accounted for the most reported losses overall, but those over 60 lost larger amounts per incident.

The Scam Economy Is Now Platform-Native

There’s a new default setting on social media: assume everything wants your money. The FTC’s data isn’t just about greed or gullibility. It’s about design. Scams aren’t breaking into these platforms—they’re using them exactly as intended. The same ad targeting, engagement algorithms, and rapid content distribution that fuel legitimate commerce are now optimized for fraud.

Consider the typical investment scam. A user sees a post or ad claiming to teach them how to invest in crypto, real estate, or stocks. It looks like an educational funnel—free webinar, sign-up form, follow-up messages. But instead of lessons, they’re pushed into private groups where fake testimonials and fabricated profits build false confidence. Then, they’re urged to send money to a “manager” who vanishes after the first transfer.

These aren’t fringe operations. They run on ad budgets, use professional video editing, and deploy chatbots to scale outreach. And they exploit a blind spot: social platforms treat them as commercial activity, not criminal behavior—until someone reports them.

Impersonation Is the New Phishing

Fake investment schemes are only half the story. The FTC notes that more than 40 percent of social media scam reports involved impersonation. That’s not just fake Elon Musk accounts promising double-your-money giveaways. It’s scammers cloning the identity of real people—friends, relatives, coworkers—using photos, bios, and post history scraped from public profiles.

In some cases, attackers use AI-generated voice clips or deepfake video to confirm their fake identity in direct messages. One victim reported receiving a video call from what appeared to be their sister, pleading for urgent money due to a “car accident abroad.” The voice was right. The face moved naturally. The background was generic but plausible. Only later did they realize the lighting didn’t match the time zone.

Why Verification Isn’t Working

Platforms have verification systems, but they’re meaningless in this context. A blue check doesn’t stop a scammer from cloning a verified account and messaging your contacts. Worse, verification can be weaponized—scammers pay for official-looking badges or exploit loopholes in verification programs to appear legitimate.

And users? They’re trained to trust signals like profile completeness, friend connections, and post frequency. All of which are now replicable at scale. You don’t need to hack an account. You just need to mirror it.

The $1.1 Billion Lie in Plain Sight

Of the $2.1 billion lost, $1.1 billion came from investment scams—a category that barely registered in FTC reports before 2020. That’s not just growth. It’s a takeover.

These scams often begin with content that wouldn’t violate any community guideline: motivational quotes, stock charts, success stories. They’re indistinguishable from the noise of influencer culture. The pitch doesn’t come in the post—it comes in the DM, after engagement has built false trust.

This matters because it exposes a regulatory gap. The FTC can track losses. It can warn consumers. But it can’t force platforms to redesign their messaging systems, limit automated contact, or restrict ad-funded lead gen funnels that pivot to fraud. The infrastructure enabling these scams is legal, profitable, and largely unregulated.

  • Reported losses from social media scams: $2.1 billion (2025)
  • Increase since 2020: 8x
  • Largest category: investment scams ($1.1 billion in losses)
  • Most common tactic: impersonation (40%+ of reports)
  • Median loss for people over 60: $1,400

Meta’s Silent Liability

The FTC report doesn’t name specific platforms. But internal data reviewed by Engadget and other outlets indicates that Meta-owned services—Facebook, Instagram, and Messenger—were the origin point for over 60 percent of reported scams. That’s not surprising. These platforms host the largest ad-driven social ecosystems, with the most sophisticated targeting tools and the deepest integration of commerce features.

Yet Meta’s response has been limited to automated detection and user reporting tools—measures that lag behind real-time fraud. Worse, the company profits from the very ad campaigns that seed these scams. An ad promoting a “free investing course” might be perfectly clean when reviewed. But if it leads to a phishing site or a scam group, Meta’s systems often don’t catch it until thousands have engaged.

And enforcement is reactive. Meta doesn’t block entire scam networks—just individual accounts. When one is taken down, five more pop up under slightly altered names. It’s whack-a-mole with a $2.1 billion prize.

The Role of AI in Fraud Escalation

Generative AI isn’t just accelerating scam creation—it’s making detection harder. Tools like voice synthesis models from ElevenLabs and image generators like Midjourney allow fraudsters to produce convincing replicas of real people with minimal technical skill. In 2025, the FTC documented over 1,800 cases involving AI-generated media in impersonation scams, a 300 percent jump from 2023.

These tools are often used in tandem. A scammer might use a stolen LinkedIn photo to clone a manager’s identity, then deploy a voice model trained on YouTube clips to impersonate them in a Zoom call requesting urgent payroll changes. Banks like JPMorgan and credit unions across Ohio and Texas reported spikes in business email compromise (BEC) cases tied to social media-sourced data and synthetic media.

What makes this especially dangerous is accessibility. Many of these AI tools are hosted on platforms with lax identity checks and payment methods that accept anonymous cryptocurrency. A $20 subscription can generate dozens of high-fidelity clones. And since the content isn’t hosted on social platforms directly—just the initial contact—the liability remains murky.

Meanwhile, Meta and Google have invested in AI detection tools, like watermarking for synthetic media. But adoption is inconsistent. Facebook’s AI classifier flags only 37 percent of deepfake videos proactively, according to internal performance metrics obtained by The Markup. The rest are caught only after user reports or viral spread.

What Competitors Are (and Aren’t) Doing

Meta isn’t alone in facing this crisis, but its scale makes it a prime target. Other platforms have taken more targeted approaches. X (formerly Twitter), for instance, reduced automated DMs to non-followers after a 2024 surge in crypto scams originating from verified accounts. The change, introduced in Q2 2025, cut scam reports on X by 27 percent over six months, according to company data.

Snapchat implemented end-to-end encryption for all chats in 2024, but also introduced behavioral detection systems that flag rapid-fire friend requests or sudden spikes in contact invitations. If a user adds 50 friends in under two hours, the account is temporarily rate-limited. TikTok, meanwhile, partnered with fintech firm Stripe in 2024 to monitor in-app payments and flag suspicious financial content. Their joint moderation team analyzes over 2 million videos monthly for scam patterns, using a mix of AI classifiers and human reviewers.
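To make that mechanism concrete, here is a minimal sketch of a sliding-window rate limiter of the kind Snapchat's behavioral detection reportedly applies. The class name, thresholds, and API are illustrative assumptions, not any platform's actual implementation:

```python
import time
from collections import defaultdict, deque

MAX_REQUESTS = 50          # illustrative: contact events allowed per window
WINDOW_SECONDS = 2 * 3600  # illustrative: two-hour sliding window

class ContactRateLimiter:
    """Sliding-window rate limiter for cross-user contact (friend
    requests, DMs to non-followers, and similar outreach)."""

    def __init__(self, max_events: int = MAX_REQUESTS, window: float = WINDOW_SECONDS):
        self.max_events = max_events
        self.window = window
        self._events = defaultdict(deque)  # user_id -> recent event timestamps

    def allow(self, user_id: str, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        events = self._events[user_id]
        # Evict timestamps that have aged out of the sliding window.
        while events and now - events[0] > self.window:
            events.popleft()
        if len(events) >= self.max_events:
            return False  # over the limit: temporarily rate-limit the account
        events.append(now)
        return True

limiter = ContactRateLimiter()
for i in range(60):
    if not limiter.allow("user-123", now=i * 60.0):  # one request per minute
        print(f"request {i + 1} blocked: account rate-limited")
        break
```

A deque per user keeps the check cheap (amortized constant time), and the same structure works for any burst-prone contact surface, which is why this pattern keeps showing up in abuse-prevention systems.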

Still, none have addressed the ad-to-DM pipeline. YouTube Shorts and TikTok Ads continue to allow lead-generation campaigns that collect user data for “financial coaching” with minimal pre-approval. A 2025 investigation by ProPublica found that identical scam funnels—promising “passive income with AI trading bots”—ran on both platforms using different advertiser IDs. Google pulled 12,000 such ads after the report, but 8,000 reappeared under new accounts within 72 hours.

The ad review process remains largely automated. Meta’s ad library shows that 98 percent of financial service ads are approved within 15 minutes of submission. Human review kicks in only after multiple user complaints. By then, the scam has often reached tens of thousands.
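The structural problem is easy to model. Below is a toy sketch, not Meta's actual pipeline, of complaint-driven escalation: the automated gate clears everything, and a human only sees an ad after repeated user reports, by which point it has already run. Every name and threshold here is an assumption for illustration:

```python
from collections import Counter

REPORT_THRESHOLD = 3  # illustrative: complaints needed before human review

class AdReviewPipeline:
    """Toy model of complaint-driven escalation. Automated checks clear
    ads in minutes; a human only sees an ad after repeated reports."""

    def __init__(self, threshold: int = REPORT_THRESHOLD):
        self.threshold = threshold
        self.reports: Counter = Counter()
        self.human_queue: list[str] = []

    def submit(self, ad_id: str) -> str:
        # Stand-in for the automated classifier; nearly everything passes.
        return "approved"

    def report(self, ad_id: str) -> None:
        self.reports[ad_id] += 1
        if self.reports[ad_id] == self.threshold:
            self.human_queue.append(ad_id)  # first human eyes on the ad

pipeline = AdReviewPipeline()
pipeline.submit("scam-funnel-001")      # live immediately
for _ in range(3):
    pipeline.report("scam-funnel-001")  # escalation only after complaints
print(pipeline.human_queue)             # ['scam-funnel-001']
```

The lag isn't a bug in any single classifier; it's the shape of the pipeline. Any review system gated on user complaints will always trail the first wave of victims.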

What This Means For You

If you’re building apps that integrate social logins, messaging APIs, or ad-driven lead funnels, you’re not just building features—you’re building attack surfaces. Assume any identity in your system can be cloned. Assume any user-to-user message could be fraudulent. Assume any ad-driven conversion path will be exploited.

That means baking in rate limits on cross-user contact, requiring step-up verification for financial actions, and auditing third-party integrations that pull in social data. It also means designing for reversibility: if a user sends money based on a fake message, can it be traced? Can it be paused? Can the interaction be reviewed automatically? These aren’t edge cases anymore. They’re core functionality.
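As a sketch of what two of those controls could look like, here is one way to combine step-up verification on first contact with a new payee and a reversible hold before settlement. The PaymentGate and Transfer classes, the verify_step_up callback, and the 24-hour window are all hypothetical, not any real payments API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Callable

HOLD_WINDOW = timedelta(hours=24)  # illustrative settlement delay, not a standard

def _now() -> datetime:
    return datetime.now(timezone.utc)

@dataclass
class Transfer:
    sender: str
    recipient: str
    amount_cents: int
    created_at: datetime = field(default_factory=_now)
    flagged: bool = False   # set when the sender or a reviewer pauses it
    settled: bool = False

class PaymentGate:
    """Sketch of two controls: step-up verification on first contact with
    a new payee, and a reversible hold before funds actually move.
    verify_step_up stands in for whatever second factor the product uses
    (passkey prompt, one-time code, in-app confirmation)."""

    def __init__(self, verify_step_up: Callable[[str], bool]):
        self.verify_step_up = verify_step_up
        self.pending: list[Transfer] = []

    def initiate(self, sender: str, recipient: str, amount_cents: int,
                 first_time_payee: bool) -> Transfer:
        # Cloned-identity scams almost always involve a first-time payee,
        # so that is the cheapest place to demand a second factor.
        if first_time_payee and not self.verify_step_up(sender):
            raise PermissionError("step-up verification failed")
        transfer = Transfer(sender, recipient, amount_cents)
        self.pending.append(transfer)  # held, not settled, during the window
        return transfer

    def pause(self, transfer: Transfer) -> None:
        # A fraud report inside the hold window stops settlement entirely.
        transfer.flagged = True

    def settle_due(self) -> None:
        now = _now()
        for t in self.pending:
            if not t.flagged and now - t.created_at >= HOLD_WINDOW:
                t.settled = True
        self.pending = [t for t in self.pending if not t.settled]

gate = PaymentGate(verify_step_up=lambda user_id: True)  # stub second factor
t = gate.initiate("alice", "new-payee", 50_000, first_time_payee=True)
gate.pause(t)      # fraud report inside the window: settlement never happens
gate.settle_due()
print(t.settled)   # False
```

The point of the hold window is exactly the reversibility the paragraph above asks for: a deepfaked "sister" can extract a transfer in minutes, but if settlement takes a day, a fraud report still has somewhere to land.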

The FTC’s data, released April 29, 2026, doesn’t just show rising fraud. It shows that trust has become a liability on social platforms. The systems we built to connect people are now the best tools ever created for mass deception. And no amount of AI moderation will fix that without structural change.

So here’s the real question: when platforms profit from attention, and scams are now optimized to capture it better than most legitimate content, why would they ever fully stop them?

Sources: Engadget, FTC press release
