Meta Touts Detection Efforts Ahead of Anti-Scam Summit

This is interesting timing.

Today, ahead of the Global Anti-Scam Summit, which is being held in Washington this week, Meta has shared some insights into its evolving efforts to combat scams in its apps, including these impressive data points:

  • In the last 15 months, reports about scam ads have declined by more than 50%, and so far in 2025, we’ve removed more than 134 million scam ads.
  • In the first half of 2025, our teams detected and disrupted nearly 12 million accounts – across Facebook, Instagram, and WhatsApp – associated with the most adversarial and malicious scammers: criminal scam centers.
  • We’re using facial recognition technology to stop criminals that abuse images of celebrities and other public figures to lure people into scams.

These are impressive numbers, right? 12 million accounts is a lot.

But then again, at Facebook’s scale, with over 3 billion users, 12 million accounts represent only a tiny fraction of its total user base.

And when you contrast these numbers against recent reports that Meta has knowingly generated around $16 billion per year from scam ads, which its system allows to run despite detecting questionable elements within these promotions, the figures above don’t seem quite as impressive.

Those numbers stem from a Reuters investigation into Meta’s internal processes to detect and filter out potential scam ads.

According to the report, Meta “failed to identify and stop an avalanche of ads that exposed Facebook, Instagram and WhatsApp’s billions of users to fraudulent e-commerce and investment schemes, illegal online casinos, and the sale of banned medical products.”

The main flaw, Reuters’ report suggests, is that Meta’s internal thresholds for what constitutes a scam ad are too lax, allowing many of these ads to be shown to users despite its systems flagging concerns.

That, in the end, has led to Meta generating billions from these scam ads. And when you also consider that around 23% of adults globally lost money to scams in 2024, with Facebook being the second most cited source of such activity (WhatsApp came in first), that does somewhat belie Meta’s topline figures that promote its evolving security systems.

To be fair, the Reuters report includes data from 2024, and Meta says that it’s seen a more than 50% reduction in reports of scam ads over the past 15 months. So it may well have improved since then, but the fact that Facebook remains such a prominent vector for these scams doesn’t really support Meta’s claims, at least at this stage, that it’s doing more to protect users.

And the impacts of this extend to all social platforms. When someone loses money to a Facebook scam, they’re far less likely to try social media shopping options again, and they’ll also warn their friends about potential scams, warding off more potential in-stream shoppers.

That, at least in part, may be why Western consumers are more reluctant about social shopping than those in Asian markets, where the preference seems to be to incorporate as many functions as possible into a single app.

Western consumers are more likely to keep their social and entertainment activity in certain apps, and do their shopping in other, trusted platforms. The prevalence of scams, then, may well be what’s restricted platforms like TikTok from making big money out of their in-stream sales pushes, which is why increased security, and a focus on this element, is critical.

But if Facebook isn’t reliant on in-stream sales, and it can generate billions from scam ads, it’s not clear that it’ll have the motivation to really address this element.

The numbers above suggest that it is taking more action, and it may well be improving on this front, but at this stage, the rate of scams in Meta’s apps is a problem for the broader social media industry.


