By The Malketeer
Over the last couple of weeks, something insidious slithered into the digital bloodstream of Malaysian media.
Disguised as a legitimate YouTube commercial, it featured none other than Prime Minister Anwar Ibrahim, calmly endorsing what seemed like a government-backed investment platform.
The video was polished. Convincing. Seemingly real.
Except—it wasn’t.
It was an AI-generated deepfake.
And it wasn’t alone.
This wasn’t some fringe conspiracy or dark web oddity.
This scam ran as a paid YouTube ad, hitting everyday Malaysians between cat videos, football highlights, and recipe tutorials.
The scam linked to dubious domains like careerrocketboost.xyz and gossipspiritnews.live, promising fantasy profits of RM22,000 per week from RM1,100 “investments.”
This isn’t just a deepfake problem.
This is a marketing crisis.
The Anatomy of the Scam
Investigations reveal a familiar formula.
These ads didn’t appear in the shadows—they were amplified by paid platforms.
Which raises the question: how did they get through?
When Programmatic Advertising Becomes the Perfect Storm
In 2025, generative AI can clone anyone’s voice using just a few minutes of audio.
It can mimic facial expressions, lip-sync dialogue, and produce pixel-perfect likenesses.
Combine this with programmatic ad tech—designed for reach, not scrutiny—and you’ve got a disinformation engine on autopilot.
No human vetting. No ethical firewall. No real-time intervention.
It’s not surprising that Malaysia recorded 454 AI-enabled voice scams in 2024, amounting to RM2.72 million in losses, with total deepfake investment fraud climbing to RM2.11 billion across nearly 14,000 victims.
These numbers aren’t theoretical. They’re tearing through communities.
The Trust Paradox
For an industry obsessed with authenticity, transparency, and brand safety, this should be the ultimate wake-up call.
If a Prime Minister’s likeness can be hijacked, cloned, and monetised during a YouTube break, what’s stopping bad actors from targeting anyone else?
The very foundation of all marketing—trust—is now in peril.
Marketers Must Lead the Defence
This isn’t a battle for regulators alone.
It’s not just about copyright or privacy.
It’s about preserving the integrity of the entire marketing ecosystem.
Here’s what needs to happen:
1. AI-Ad Disclosures Must Be Mandatory
Platforms must clearly label all AI-generated ad content, especially ads involving human likenesses. No exceptions.
2. Platforms Must Build Smarter Filters
YouTube, Meta, TikTok—all have AI detection capabilities for nudity, violence, and copyright. Extend them to deepfakes. If you can detect a Taylor Swift song, you can detect her fake face too.
3. Cross-Industry Watchdogs Must Rise
Imagine a collaborative watchdog led by MCMC, Securities Commission, media agencies, and tech platforms—dedicated solely to ad verification and identity fraud detection.
4. Public Literacy Campaigns Are Crucial
If Malaysians are being scammed through fake PM videos, it’s time for a full-scale media literacy drive. Teach users how to spot fakes before their wallets are emptied.
It’s Not Just A Scam—It’s A Crisis of Credibility
Too many still see deepfakes as novelty. Funny voices. Celebrity memes. Entertainment.
But when those tools are weaponised to fleece your aunt, ruin a brand reputation, or destabilise public trust in institutions, the joke ends.
This isn’t just a tech issue.
It’s a moral imperative.
And marketers are on the frontlines.
The Day Trust Became Programmable
A paid YouTube ad ran with a fake Prime Minister selling fake promises on a fake platform.
It wasn’t caught. It was funded. It was delivered to unsuspecting Malaysians by an ad engine that didn’t ask questions.
If we don’t act—as marketers, regulators, platforms and citizens—we may soon find ourselves peddling messages in a world where no one believes anything anymore.
Because when trust becomes just another variable in an algorithm, we all lose.