A woman in Argentina thought she was building a bond with Hollywood star George Clooney. Over WhatsApp, she exchanged voice notes and video messages with a man who looked and sounded just like the actor. After weeks of emotional engagement, she sent him over ₹11 lakh (roughly US$13,000) to join an “exclusive fan club.”
Only, it wasn’t Clooney.
It was a deepfake.
Welcome to the latest frontier of online fraud: AI-generated impersonation scams so convincing that even the sharpest minds are falling for them.
Trust, Hijacked by Technology
Deepfakes—hyper-realistic videos or audio created using artificial intelligence—are no longer just tools for viral memes or political parodies. Today, they’ve become powerful weapons in the hands of cybercriminals targeting unsuspecting victims across the globe, including India.
These scams are no longer limited to emails claiming you’ve won a lottery. They wear the faces and voices of CEOs, Bollywood celebrities, or even your relatives. They ask for urgent payments, confidential business information, or emotional support—and victims comply, believing the request is genuine.
In 2024 alone, Indian authorities reported a surge in deepfake-enabled scams that cost individuals and companies crores of rupees. The sophistication of these frauds makes them especially dangerous. The only clue something’s off? A gut feeling.
Building Fake Realities, One Frame at a Time
Creating a deepfake is easier than ever. Scammers pull content from YouTube interviews, Instagram reels, or old corporate webinars. Using AI tools available online for free or at little cost, they clone voices, mimic facial movements, and generate fake video calls in minutes.
The scam doesn’t start with tech—it starts with trust.
“Most victims never suspect anything because the video or audio comes from someone they admire or know personally,” said cybersecurity analyst Sandeep Mishra. “By the time the truth comes out, the money is long gone.”
One popular tactic is to impersonate company leadership. Imagine getting a video message from your CEO asking for an urgent fund transfer. The face matches, the voice is familiar, and the message is urgent. Many employees don’t think twice.
India: A Prime Target
India’s digital-first economy makes it a fertile ground for deepfake scams. Millions of Indians rely on UPI for everyday payments, conduct business over WhatsApp, and follow influencers religiously on social media. This connectivity, while empowering, also opens new doors for fraudsters.
In early 2025, a Pune-based startup lost ₹1.8 crore after a finance team member received what seemed to be a video call from the company’s UK-based founder. The founder, speaking fluently and confidently, asked for an emergency payment to a supplier. The call lasted three minutes. The funds were gone in five.
It was only later, when the real founder denied making any such call, that they realized it was a deepfake.
Why Deepfakes Work So Well
What makes deepfake scams effective is their emotional and psychological design. They often exploit urgency (“I need this done now”), authority (“This is from your boss”), and trust (“You know me, right?”). Combined with ultra-realistic visuals, this makes resistance incredibly difficult.
Unlike traditional phishing, these scams carry no spelling mistakes or suspicious links. They are personal.
Even as AI adoption rises across India, the average citizen remains largely unaware of what the technology can now fabricate.
Fighting Back: What You Can Do
India’s Ministry of Electronics and IT is working on legislation to curb the misuse of AI, but regulation alone won’t stop the spread. Awareness is key.
Here are a few tips to stay safe:
- Always verify requests involving money or sensitive information via a second channel—like a direct phone call.
- Use AI detection tools—startups and cybersecurity firms now offer services that flag potential deepfakes.
- Stay updated on how scammers operate. The more you know, the better your instincts.
- Train employees and family members, especially older adults, who are often targets.
The Road Ahead
The line between reality and fiction is blurring faster than our defenses are evolving. As AI-generated content becomes indistinguishable from real life, we’ll need to sharpen not just our tools, but our judgment.
Scams like the fake George Clooney incident are not one-off stories. They are warning signs of a future where “seeing is believing” no longer applies.
And unless we act swiftly, through education, verification, and regulation, we might all fall into the trust trap.