When AI Interviews AI: The Bizarre Future of Hiring

Overview 

  • The rise of AI candidates: Applicants are increasingly using AI tools to tailor their CVs, automate high-volume submissions, and receive real-time coaching during video interviews.

  • The screening dilemma: Employers are deploying AI screening tools to manage application volumes, creating a scenario where algorithms are essentially assessing other algorithms.

  • Accountability risks: Fully automated hiring processes complicate legal compliance and make it difficult to maintain clear, defensible audit trails.

  • The value of human judgement: Responsible recruitment still demands human oversight, transparent screening criteria, and authentic skills-based assessments to identify true capability.

It started with AI-written cover letters.

Recruiters noticed the tone first.

Too polished. Too consistent. Suspiciously well-matched to the job description.

Then the screening tools got better at detecting them. The writing tools got better at sounding human. The arms race quietly began.

Then came AI-assisted video interviews. Real-time coaching tools that listen to questions and surface suggested answers in a sidebar. Earpieces. Screen overlays. Response generators running while the candidate speaks.

Then came fully automated job applications. Agents that scrape job boards, tailor CVs to each vacancy's keyword profile, and submit applications at scale.

The candidate in your pipeline may not have written a single word of their application.

And the tool assessing them may not have a human reviewing its outputs before the shortlist is confirmed.

This is where talent acquisition is heading.

In some organisations, it's already arrived.

The AI Applicant Is Already Here

Job application agents - tools that autonomously apply to hundreds of vacancies simultaneously - are commercially available, widely used, and increasingly difficult to distinguish from genuine applicant behaviour.

What this creates is a volume problem with a quality illusion.

  1. Application volumes inflate without a quality increase. A recruiter seeing 400 applications isn't seeing 400 candidates who considered the role. They're seeing a mixture of genuine applicants and automated submissions from people who may not have read the job description.

  2. Keyword optimisation detaches the CV from the candidate. AI tools that tailor CVs to match job description language produce applications that score well against keyword screening and poorly represent actual capability. The candidate who gets through may be a worse fit than the one who didn't.

  3. Video coaching undermines behavioural assessment. If a candidate's responses are being shaped in real time by an AI reading the question and surfacing structured answers, the interview is no longer measuring the candidate. It's measuring the tool.

If the application was written by AI, the CV optimised by AI, and the interview answers coached by AI - what exactly have you assessed?
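The keyword mechanic above can be made concrete. The sketch below is a deliberately naive keyword-overlap screener — an illustrative assumption about how simple keyword-density scoring behaves, not any vendor's actual algorithm — showing how a keyword-stuffed submission can outscore a genuine one that says far more:

```python
# Illustrative only: a naive keyword-overlap screener,
# not any real screening vendor's logic.

def keyword_score(cv_text: str, job_description: str) -> float:
    """Fraction of job-description keywords that also appear in the CV."""
    jd_terms = {w.lower().strip(".,") for w in job_description.split() if len(w) > 4}
    cv_terms = {w.lower().strip(".,") for w in cv_text.split()}
    if not jd_terms:
        return 0.0
    return len(jd_terms & cv_terms) / len(jd_terms)

jd = "Seeking engineer experienced with Kubernetes, Terraform and incident response."

# A genuine applicant describing real capability in their own words:
genuine = "Eight years running production infrastructure; led on-call and postmortems."

# An AI-tailored submission mirroring the job description's vocabulary:
optimised = "Kubernetes Terraform incident response engineer experienced Kubernetes."

# The keyword-stuffed text outscores the genuine one despite saying less.
assert keyword_score(optimised, jd) > keyword_score(genuine, jd)
```

The genuine applicant scores zero — none of their words match the posting's vocabulary — while the stuffed text matches nearly every term. Real screening tools are more sophisticated, but the failure mode is the same: the score measures vocabulary overlap, not capability.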

The AI Screener Is Already Here Too

The irony is not subtle.

Organisations deploying AI screening tools to manage volume are using algorithms to filter candidates who may have used algorithms to get through those filters.

Two systems. Playing against each other.

AI screening tools assess tone, structure, keyword density, and - in video screening - facial expression, vocal pattern, and response latency. They were calibrated on human applicant behaviour. When the applicant is also an AI system, the calibration breaks.

  1. AI-generated applications may score better, not worse. A well-optimised AI application hits keyword targets and avoids the irregularities that sometimes flag genuine human applications as weak. The screening tool may be systematically advancing bot-assisted applications over authentic ones.

  2. Behavioural video assessment fails against coached responses. Tools that assess authenticity through micro-expression analysis were not designed to account for candidates receiving real-time scripted responses. The signals no longer mean what they were trained to mean.

  3. The feedback loop compounds the problem. AI screening tools trained on historical outcomes learn from what got through. If AI-assisted applications consistently advance, the model learns to reward those signals. The bias reinforces itself.

A Process That Can't Be Audited Can't Be Defended

There is a serious compliance dimension here, and South African employers cannot treat it as a distant concern.

The Employment Equity Act requires shortlisting decisions to be documentable, consistent, and free from unfair discrimination.

When both sides of the process are automated, the audit trail question becomes genuinely difficult.

  1. Who is responsible for a decision made by an AI screener? The tool made the call. The vendor built the tool. The organisation deployed it. In a CCMA referral, the organisation carries the liability - regardless of which system decided.

  2. Can an AI-assessed shortlist demonstrate EE compliance? If the screening logic is opaque, the organisation cannot explain why candidates advanced or didn't. That explanation is precisely what an EEA audit requires.

  3. POPIA obligations don't pause for AI. Automated processing of candidate data carries the same consent and lawful basis obligations as manual processing. Volume amplifies the requirement. It doesn't reduce it. 

Automated decision-making doesn't dissolve accountability. It relocates it - to the organisation that chose to automate.

What the Process Is Actually Selecting For

If both sides are increasingly automated, the candidates who succeed are not necessarily the most capable.

They are the ones best at using AI tools to navigate AI screening systems.

That is a specific capability. It may be relevant for some roles. It is not a proxy for the full range of capabilities most organisations are trying to hire for.

The risk is a talent pool systematically filtered for tool proficiency rather than job-relevant capability - with the organisation unaware, because nobody paused to ask what the process was actually measuring.

What Responsible Talent Acquisition Looks Like Now

The answer is not to remove AI from the process.

It is to be deliberate about where human judgement stays non-negotiable.

  1. Human review before shortlist confirmation. AI screening reduces volume and surfaces signals. A human approves the shortlist before it moves forward. The tool assists. The recruiter decides.

  2. Skills-based assessment AI can't easily simulate. Work samples, structured problem-solving tasks, and role-specific assessments require demonstrated capability rather than articulated competence. Significantly harder to proxy through coaching tools.

  3. Transparent screening criteria. If your screening logic can't be explained to a candidate who asks, it can't be defended to a compliance auditor who does. Explainability is both an ethical standard and a legal requirement.

  4. Detection awareness without paranoia. Knowing that AI-assisted applications exist, and building review touchpoints that account for this, is reasonable operational hygiene. Assuming every polished application is inauthentic is not.

  5. ATS infrastructure that maintains the audit trail regardless. Whatever the candidate used to apply and whatever tool assisted in screening, the compliance record needs to reflect a documented, defensible process. That record lives in the ATS.

Final Takeaway

The AI-interviews-AI scenario is not a thought experiment.

It is a description of processes already running in organisations that haven't fully registered what's changed.

The recruiters who navigate this well won't be the ones who resist AI on either side. They'll be the ones who stay clear about what the process is trying to measure, where human judgement is irreplaceable, and what the compliance framework requires regardless of how the technology evolves.

AI will keep getting better at applying for jobs.

AI will keep getting better at screening them.

The question is whether the humans in the middle are paying attention.

Because the accountability, as always, stays with them.