Bengaluru: With AI voice and avatar tools now easily available, there has been a surge in deepfake interviews where job applicants use GenAI technology to manipulate their voice, appearance or even identity.
Reports estimate that 10-15% of online interviews already show signs of cheating. Recently, InCruiter, a Bengaluru-based AI-powered interview intelligence platform, observed a participant answering technical questions and engaging naturally during an interview. Though the interaction initially appeared normal, the session was flagged by InCruiter's continuous deepfake detection system, which identified subtle visual anomalies suggesting the presence of a synthetic identity overlay.
Facade of deepfakes: How deeply fake can things get?
Typically, a deepfake interview candidate uses synthetic media to deceive employers, relying on AI-assisted content to misrepresent their identity or presence. In this case, InCruiter uncovered a deepfake impersonation attempt targeting a global fintech and private credit platform client.
The platform's analysis revealed that the individual on screen was not the real candidate, but an AI-generated avatar designed to replicate the candidate's appearance and voice, likely intended to bypass the automated evaluation process. The system detected behavioural and visual inconsistencies throughout the session and generated a detailed proctoring report containing timestamped evidence, trust scores, and flagged indicators of manipulation.
When InCruiter launched its Deepfake Detection technology some months ago, the system flagged fraudulent activity in 25-30% of suspicious interview sessions.
Anil Agarwal, Founder and CEO of InCruiter, said AI-led interviews are rapidly becoming the future of hiring at scale. But as organisations adopt automation, they must also be prepared for increasingly sophisticated forms of fraud. Deepfake impersonation is a real and emerging risk.
HR experts warned that deepfake interviews are now becoming a real threat to organisations.
Ankit Aggarwal, Founder and CEO of Unstop, said online interviews have expanded access at scale, but they have also introduced a real challenge around trust. Candidates today attempt to game the system through external assistance, parallel devices, and other methods, which create a credibility gap for recruiters and impact the quality of hiring decisions.
Based on InCruiter's observations, IT and tech companies represent the largest share of such fraud at around 60%, followed by BFSI at 15%, BPOs and KPOs at 10%, startups at 10%, and manufacturing and core sectors at 5%.
Aditya Narayan Mishra, MD and CEO of CIEL HR, said that once such fraudulent hires receive credentials, they gain access to the firm's confidential data. There are instances where candidates use AI-generated avatars or pre-recorded responses to misrepresent their identity, which can compromise hiring quality and organisational security.
To counter this, organisations are rethinking their interview design. Interviewers are being trained to ask contextual, situational questions that require spontaneous thinking, making it difficult for scripted or pre-recorded responses to pass through. Additionally, many companies are reintroducing at least one in-person or live, closely monitored final round to validate authenticity, Mishra said.
He added that it is also essential to strengthen background verification. It should not be treated as a checkbox exercise; verification needs to be intentional, thorough, and layered, combining identity checks, employment validation and, where necessary, digital footprint analysis.
"Organisations are actively addressing this through a more secure, tech-led approach, with platforms enabling end-to-end online assessments backed by AI proctoring that continuously tracks candidate behaviour, along with strong impersonation checks, real-time screen monitoring, and full screen enforcement to prevent any navigation outside the test environment. These measures help ensure that the person being evaluated is authentic and that the process remains uncompromised," Aggarwal said.