Every minute, someone on Facebook or Instagram gets fooled by a fake profile. Yet the same company building billion-parameter AI models can’t figure out who uploaded a photo first. That’s not irony; that’s negligence in high definition.
Despite Meta’s dominance in artificial intelligence and machine learning, scams run through fake identities are not only alive but thriving. According to recent findings, Meta removed approximately 8 million scam accounts in just the first half of 2025. The Tech Transparency Project identified 63 scam advertisers who collectively ran more than 150,000 political ads on Meta platforms, spending $49 million. The Federal Trade Commission reports that social media scams contributed to billions in losses, with 45% of social media fraud reports in 2021 involving fake shopping schemes.
These aren’t just statistics; they’re stories of trust broken, reputations damaged, and lives disrupted. And the biggest irony? The technology to fix this already exists inside Meta’s own labs.
In Pakistan, this problem cuts even deeper because of cultural trust. When a cousin sees a familiar face and a known name on Facebook, they don’t think twice before helping. Just last week, my own cousin transferred PKR 10,000 to someone pretending to be me, an account using my profile photo, my name, and my identity. He did it because he trusted me, and because Meta didn’t act.
I’ve reported over 20 fake accounts in the last six months. Every time, I have to post about it publicly, ask people to report, wait for days, and still, no systemic fix. Meta flags harmless posts, deletes active community groups like Soul Brothers and Bahria Town Locals, yet somehow lets scammers run free with cloned accounts.
We don’t need a supercomputer to solve this; the fix is almost childishly simple. Imagine this system:
Check Account Creation Date: If two accounts have the same name, compare which one was created first. The older one gets the trust score.
Check Who Uploaded the DP First: Every image on Meta has a timestamp and hash. Whoever uploaded that photo first is the source. Any new account reusing that image gets flagged immediately.
That’s it: two checks, two timestamps, problem solved.
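To make the idea concrete, here is a minimal sketch of the two-rule check in Python. The Profile fields (created_at, photo_hash, photo_uploaded_at) and the function name are my own illustrative assumptions, not Meta’s internal schema; think of it as pseudocode that happens to run.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Profile:
    name: str
    created_at: datetime          # when the account was created
    photo_hash: str               # hash of the current display picture
    photo_uploaded_at: datetime   # when that picture was first uploaded

def impersonation_check(existing: Profile, candidate: Profile) -> str:
    """Apply the two proposed rules when a new account collides with
    an existing one on name and display picture."""
    same_name = existing.name.strip().lower() == candidate.name.strip().lower()
    same_photo = existing.photo_hash == candidate.photo_hash
    if not (same_name and same_photo):
        return "no_conflict"
    # Rule 1: the account created first earns the trust score.
    # Rule 2: whoever uploaded the photo first is treated as its source.
    if (existing.created_at < candidate.created_at
            and existing.photo_uploaded_at < candidate.photo_uploaded_at):
        return "flag_candidate"   # newer clone of an older identity
    return "manual_review"        # ordering is ambiguous; escalate
```

The point isn’t the code; it’s the scale of the problem: two timestamp comparisons and one hash lookup, all against data Meta already stores.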
If Meta wanted, this could be deployed in weeks, not years. The cost? Pennies per verification compared to the billions lost to scams. So why hasn’t it happened?
Because it doesn’t make money. Fake accounts, even scam ones, add to engagement metrics. They comment, react, message, and share. And unless there’s a major PR disaster, Meta’s moderation systems stay reactive, not preventive.
The company proudly builds large language models, generates lifelike avatars, and moderates memes with precision, but can’t auto-detect when a profile uses the same photo and name combination as another existing account. Their AI removes harmless satire but lets fake identities thrive.
This isn’t a tech limitation; it’s a priority failure.
For people like me, and for millions of creators, freelancers, and public figures, the lack of proactive identity protection means daily reputation risk. My published articles in Dawn, The Nation, and The Express Tribune, my 170K+ followers on Facebook and 110K+ on Instagram, and my verified business pages should be enough proof of authenticity. Yet I can’t even get verified, because my public brand name doesn’t match my passport name.
Meanwhile, scammers with my photo can spin up an account in minutes and start scamming people within hours.
Meta’s AI is strict where it shouldn’t be and asleep where it must be awake. It bans communities for no clear reason, like the removal of Pakistan’s famous groups Soul Brothers and Soul Sisters, but doesn’t flag obvious impersonations. That shows misaligned priorities: controlling speech is easier than protecting people.
Even WhatsApp, another Meta product, admitted to banning 6.8 million scam-linked accounts in just six months of 2025. Yet Facebook and Instagram remain soft targets for impersonation. The tools exist, but they aren’t being used where they matter.
Let’s revisit the proposed fix; it’s not just simple, it’s foolproof:
Rule 1: Compare the account creation date.
Rule 2: Compare who uploaded the display picture first.
If a newer account uses the same image and name, it’s either a fan page or a fake. The system can easily check bio differences or ask for identity proof if needed. It’s unbiased, fast, privacy-safe, and effective.
Even adding perceptual hashing (to catch slightly edited images) would make it almost impossible for scammers to hide. No need for expensive facial recognition or manual reporting.
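For the skeptics, here is roughly what that looks like in practice, using the open-source Python imagehash library; the file paths and the distance threshold are illustrative, and a production system would tune them against real data.

```python
# pip install imagehash pillow
from PIL import Image
import imagehash

def photos_match(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """True if two images are perceptually near-identical.
    Subtracting two imagehash values gives the Hamming distance
    between their 64-bit perceptual hashes."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance

# Hypothetical use: compare my real DP with a suspected clone's DP.
if __name__ == "__main__":
    print(photos_match("my_dp.jpg", "suspected_clone_dp.jpg"))
```

Perceptual hashes barely change under resizing, re-compression, or light filters, which is exactly how scammers typically tweak a stolen display picture.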
If Meta implemented this, it would save the money users lose to impersonation scams, the reputations of the people being cloned, and some of the trust that keeps its platforms alive.
Instead, people continue to lose money, and trust in social media continues to erode. Every fake profile that goes live chips away at Meta’s credibility as an AI leader.
If Meta can build AI that generates images, detects hate speech in 100+ languages, and translates conversations in real-time, then why can’t it use the same AI to protect real people from impersonation?
The uncomfortable truth: Meta’s AI isn’t designed to protect users; it’s designed to predict behavior, for ads, engagement, and profit. User safety is an afterthought.
The next time someone’s cousin transfers money to a scammer pretending to be family, don’t blame the victim. Blame the trillion-dollar AI company that can teach machines to see, but refuses to make them care.
Because sometimes, the smartest systems in the world fail not from lack of intelligence, but from lack of humanity.
Muhammad Tahir Ashraf (BeyondTahir) is a tech writer and AI analyst who simplifies complex technology for everyone. His insights have been featured in Dawn, The Nation, and The Express Tribune. Connect with him on LinkedIn, read more on Medium, or visit www.beyondtahir.com for the latest tech perspectives.