It is a lazy Sunday afternoon. You are busy doomscrolling. You stumble upon a video of a Bollywood A-lister endorsing a not-so-famous brand.
You pause to wonder why, but then the brand wins your trust and you rush to purchase the product. And boom, all you get is a box full of disappointment.
What do you do in such a situation? Blame the celebrity? Or not? Why not, you ask?
Because the video that caught your attention might not be real. And the celebrity in it might be a victim of a deepfake: an AI-generated or altered photo or video of a person, used without their permission.
The rise of deepfakes, which artificially alter reality, has become such a nuisance that even Israel's Prime Minister Benjamin Netanyahu had to release a clarification video to prove his videos were not AI-generated.
The issue began with the breaking news of his death. Quashing the rumors, he released a video showing him sipping coffee at a cafeteria, which reports say was recorded in Jerusalem.
In the clip, he sarcastically remarked, "I'm dead for coffee," while ordering a steaming cup at the cafe.
He then raised his hand toward the camera and asked, "Do you want to count the number of fingers?", a reference to speculation on social media claiming his recent televised address was AI-generated and showed him with six fingers on one hand.
The video was labelled as "deepfake" by Grok.
Soon after, another video of Netanyahu sparked "deepfake" rumors where social media users pointed to what they claimed were irregularities.
One user shared a slowed-down version of the clip, highlighting that at the 0:28 mark, Netanyahu's ring seems to disappear, only to reappear around 0:30.
Others flagged visual inconsistencies, including what they described as distorted fingers in certain frames, suggesting the video might be AI-generated.
For those still wondering why deepfakes are an issue, this should be your answer.
The question here is: when AI is outsmarting animators and graphic designers, how can one tell a deepfake from the real thing? The answer, however, is not that simple.

"The technology behind deepfakes has advanced to a point where even a trained eye struggles to tell the difference. What's particularly unsettling is that most of this content is built using material people have willingly posted online, their photos, videos, reels on Instagram and clips on YouTube. Your public social media presence is essentially a training dataset for someone with bad intentions," Aaron Bugal, Field CISO, APJ at Sophos, said.
Going private on your social accounts helps, but it is not a complete fix. Someone you know could reshare your content, and suddenly it is out in the wild again.
"So awareness and monitoring become your next line of defence. I recommend setting up Google Alerts and Bing News Alerts for your own name. It sounds crazy, but it's incredibly effective. The moment something surfaces online that mentions you, you're notified allowing you to act fast to request a takedown or publicly set the record straight," added Bugal.
From top politicians to Bollywood A-listers, no one is spared. So how can one stay safe?
"We're entering a space where "seeing is believing" is no longer a reliable standard. The first discipline is to slow down before trusting or sharing any content. From an investigative standpoint, verification must now be layered. Check the source. Assess whether the content is coming from a verified origin or an untraceable forward. Cross-check if the same information is being reported by credible platforms. Genuine content rarely exists in isolation. Context also matters. Deepfakes often surface during high-emotion events such as elections, crises, or celebrity incidents. That itself is a signal to pause," Sagarika Chakraborty, CEO - India & Gulf, IIRIS Consulting, explained.
What is also encouraging, Bugal said, is that regulators are starting to take this seriously.
"The MeitY advisory in India, which put social media platforms on notice about removing deepfake content or risking the loss of safe harbor protections, is a step in the right direction. Platforms need to be held accountable, and that kind of regulatory pressure matters," he said.
Everything on the internet leaves a footprint, and so do these deepfakes. But that window for spotting them, Bugal said, is closing faster than most people realise.
"If you're watching a video carefully, there are things that don't quite add up. The lighting on the face sometimes looks slightly off compared to the background. The edges around the hair or jawline can appear blurry or seem to flicker. The mouth movements don't always sync perfectly with the audio. And the facial expressions can look a little too rigid, almost like the person is wearing a mask rather than actually emoting," Bugal explained.
He added that in live video calls, which is where deepfakes are increasingly being deployed, one should watch for reluctance to move the camera or to do something spontaneous, like scratching the nose or turning the head quickly. These are things a real-time deepfake struggles to handle convincingly.
"Audio may feel slightly off in tone or pacing. Background distortions, especially around hairlines or motion edges, are also telling. From a forensic lens, metadata inconsistencies, compression artefacts, and absence of source lineage are key flags," Chakraborty pointed out.
However, high-quality deepfakes are now crossing the threshold of easy human detection.
"But here's the honest truth, as the technology gets better, these tells are disappearing. What worked as a detection method six months ago may not work today. We're reaching a point where human observation simply won't be enough, and we'll need technology to fight technology," Bugal added.
The biggest risk, Chakraborty said, is not that deepfakes exist. It is that people still trust everything they see.

How are deepfakes created?
This is where, Bugal said, the story gets a little uncomfortable.
Not too long ago, creating a convincing deepfake required serious technical expertise, expensive equipment, and a lot of time. That's simply not the case anymore.
"Today, there are open-source tools freely available online that can generate a realistic synthetic video or clone someone's voice with minimal effort. We're talking about someone with a regular laptop and a few hours on their hands. No coding background, no studio, no special hardware. The AI does the heavy lifting. It learns from existing images and videos of the target person, usually scraped from social media, and builds a model that can mimic their face and voice remarkably well," he explained.
That shift from a niche technical capability to something almost anyone can do is what makes this such a serious and urgent problem. The barrier to entry has essentially collapsed, and that changes the threat landscape entirely.
What other threats do deepfakes pose?
The threat landscape is multi-layered.
"At an individual level, deepfakes can lead to reputational damage, harassment, extortion, and psychological distress. At an organisational level, they can be used for fraud, impersonation of leadership, or manipulation of financial decisions," Chakraborty said, while adding that increasingly, deepfakes are being seen as tools of information warfare.
Then there is the reputational damage: politicians shown saying things they never said, or celebrities placed in compromising situations they were never part of.
"But the threat I find most underappreciated right now, and one that I think businesses are dangerously underprepared for, is 'deepfake job applicants'. It sounds like something out of a Black Mirror episode, but it's happening. Someone uses a synthetic video persona and a cloned voice to pass a remote job interview, gets hired, and then has legitimate access to your systems and data. From the inside, they can do enormous damage quietly and over time," Bugal said.

Most HR teams, he said, aren't trained to spot this. Most organisations don't have processes in place to verify that the person on the other end of a video call is who they claim to be. And that gap is being exploited right now.
What to do if you see a deepfake of yourself?
Speed and structure, Chakraborty said, are critical to handle such a situation.
"First, document everything including screenshots, URLs, and timestamps. This forms evidence. Second, report the content immediately to the platform for takedown. Third, initiate legal recourse under applicable cybercrime, defamation, and identity misuse provisions. If required, engage cyber forensic experts to trace origin and circulation networks. The longer such content remains online, the harder it becomes to contain," Chakraborty explained.
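The documentation step Chakraborty describes can be started with nothing more than a small script. The sketch below is purely illustrative (the function name and record fields are hypothetical, not from any official tool): it logs a URL, a UTC timestamp, and a SHA-256 fingerprint of a saved screenshot, so the evidence can later be shown to be unaltered.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_evidence(url: str, screenshot_bytes: bytes, note: str = "") -> dict:
    """Record one piece of evidence: the URL, a UTC timestamp, and a
    SHA-256 fingerprint of the screenshot so it cannot be silently altered."""
    return {
        "url": url,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "screenshot_sha256": hashlib.sha256(screenshot_bytes).hexdigest(),
        "note": note,
    }

# Example: log a (dummy) screenshot of an offending post
entry = log_evidence("https://example.com/fake-video", b"...png bytes...",
                     "Deepfake clip impersonating me")
print(json.dumps(entry, indent=2))
```

Keeping such a log alongside the raw screenshots gives platforms, lawyers, and forensic experts a tamper-evident trail to work from.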
Why not ban such deathtraps?
Some things are easy to say and hard to implement. So it is with deepfakes: a blanket ban is neither practical nor effective.
"This is because the underlying technology has legitimate applications. The focus must be on regulating misuse. There must be clear legal consequences for malicious creation, impersonation, and non-consensual distribution. Platforms must also be held accountable for timely takedown and traceability. Regulation needs to be balanced but firm," Chakraborty opined.
Why are such apps available freely?
This is, most experts believe, a double-edged sword.
"Deepfake technology is built on broader AI innovation that powers industries such as entertainment, gaming, accessibility, and content creation. Open ecosystems accelerate innovation, but they also enable misuse. This is a classic dual-use challenge where technology evolves faster than regulatory frameworks," Chakraborty explained.
If all is not well, what does the future hold for "truth"?
Bugal believes we're heading into genuinely uncharted territory.
"The philosophical problem deepfakes create is profound. They attack the very foundation of trust. We've always relied to some degree on seeing and hearing something to believe it. Deepfakes dismantle that assumption entirely. And as the technology improves, even the current detection methods will become obsolete," he opined.
However, what gives him some hope is the parallel we can draw with how we handled trust on the internet.
"We now have digital certificates that verify whether a website is legitimate or whether an email is genuinely from who it claims to be from. I believe we'll need something similar for video content, a way to digitally sign and authenticate footage so people can verify its origin and integrity. That kind of framework will become essential," Bugal added.
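The kind of framework Bugal describes can be sketched with standard cryptographic primitives. The Python example below is only an illustration of the idea, not any real scheme (production systems such as C2PA content credentials use public-key signatures, so viewers do not need a secret key): it signs the hash of a clip's bytes with an HMAC, letting anyone holding the key verify origin and integrity.

```python
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the publisher

def sign_footage(video_bytes: bytes) -> str:
    """Sign the SHA-256 digest of the footage with the publisher's key."""
    return hmac.new(SECRET_KEY, hashlib.sha256(video_bytes).digest(),
                    hashlib.sha256).hexdigest()

def verify_footage(video_bytes: bytes, signature: str) -> bool:
    """Check that the clip matches the signature, i.e. it hasn't been altered."""
    return hmac.compare_digest(sign_footage(video_bytes), signature)

clip = b"raw video bytes..."
sig = sign_footage(clip)
print(verify_footage(clip, sig))            # True: untouched clip
print(verify_footage(clip + b"edit", sig))  # False: tampered clip
```

Even one edited byte changes the hash, so a tampered or AI-altered clip fails verification, which is exactly the property a video-authentication standard would rely on.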
On the human side, the shift we need is cultural. Whether you're hiring someone remotely, receiving a video message from a supposed colleague, or watching a clip of a public figure, you can't just take it at face value anymore.
"You must think like a security professional, question what you see, verify before you trust, and never hesitate to probe further when something feels off. That zero-trust mindset isn't just for IT teams, it's for all of us," Bugal advised.
Deepfakes can help, too
However, not all is bad here. As Chakraborty believes, when used ethically, deepfakes can also prove beneficial.
"The technology itself is not the problem. The absence of guardrails is," she asserted.
She added that deepfakes enable advanced visual effects in cinema, language dubbing, educational simulations, and accessibility tools such as voice reconstruction.
"Even in investigative training, controlled simulations can be valuable," Chakraborty concluded.

