
AI in Job Applications: A Guide for HR Teams

TL;DR: AI-assisted job applications are increasingly common and not inherently wrong, but they create real challenges for HR teams trying to assess genuine talent — the solution lies in smarter assessment design and clear policy, not outright bans.

Introduction: The CV in Your Inbox May Have Been Written in 30 Seconds

Picture this: you've posted a role on Monday morning. By Friday, 300 applications have landed in your inbox. You start reading, and something feels off. The cover letters are polished, articulate, and oddly similar. The language is confident, the structure is impeccable — and yet, somehow, none of them feel like they were written by an actual person.

Welcome to recruiting in 2025. AI tools like ChatGPT can draft a tailored CV and cover letter in under a minute, and candidates know it.

This isn't a moral panic piece. Candidates using AI to help with applications isn't inherently wrong — and the answer isn't to ban it outright. But it does create a genuine challenge for HR teams trying to assess authentic capability and cultural fit. This guide is written for HR professionals and hiring managers who want practical answers: how to set a clear policy, how to spot the signals of AI-generated applications, and how to design assessments that still surface real talent — regardless of how the application was drafted.


First, Let's Be Honest: Candidates Have Always Had Help

Before we get into detection tools and policy frameworks, it's worth stepping back. Candidates have never applied for jobs entirely alone. CV writers, career coaches, university careers services, well-meaning parents — the idea of the perfectly self-authored application is something of a myth. AI is a new tool in a very long tradition of application assistance.

The real question has never been "did they get help?" It has always been: "does this application tell me anything useful about the candidate?"

It's also worth acknowledging what AI genuinely offers to some candidates. For non-native English speakers, AI can help them express their experience clearly without language barriers obscuring their actual capability. For neurodivergent candidates who find written self-promotion difficult, it can level a playing field that was never particularly fair to begin with. For candidates without access to expensive career coaches or well-connected networks, it provides support that was previously only available to the privileged few.

The challenge for HR teams isn't that candidates are getting help. It's that when everyone uses the same tool with the same prompts, applications start to look identical — and the signal you're trying to read disappears into the noise. That's where the real problem lies, and it's a design problem as much as a policy one.


Do Recruiters Care If You Use AI for a CV? What the Data Shows

The honest answer is: it depends on what the AI use is masking.

Most recruiters aren't categorically opposed to AI-assisted applications. According to research cited in Deel's AI-Powered ATS Guide, 98% of hiring managers using AI in their own recruitment processes report efficiency improvements — which means the people reviewing your AI-written CV are often using AI themselves to screen it. There's a certain irony in that.

AI tools can process thousands of CVs in minutes; a human recruiter working through 300 applications manually spends roughly six seconds per CV (Employment Hero / HRM Outlook). The volume problem is real on both sides: candidates use AI to apply faster because there are more roles to apply to; employers use AI to screen faster because there are more applications to review. The result is a kind of arms race with no clear winner — and potentially less signal for everyone.

What recruiters do care about is when AI use obscures a candidate's actual ability to think, communicate, or demonstrate relevant experience. A beautifully written cover letter that turns out to have no connection to how the candidate actually speaks or thinks in an interview isn't just unhelpful — it wastes everyone's time.

There's also a legal dimension worth noting. In May 2025, a US federal court granted preliminary collective action status in the Workday AI lawsuit, allowing job seekers who believe they were unfairly rejected by Workday's AI screening tools to join the case. Plaintiff Derek Mobley alleged that Workday's algorithms disproportionately screened out candidates over 40 and those with disabilities — flagging CV gaps and keywords in ways that had a disparate impact on protected groups. Workday argued it doesn't make final hiring decisions, but the court allowed the case to proceed. The message for HR teams is clear: automated screening tools carry real legal risk, and human oversight at every stage isn't optional — it's essential.


How Can You Tell If a Job Application Was Written by AI?

This is one of the most common questions hiring managers are asking right now, and the honest answer is: you often can't tell with certainty — and that matters for how you respond.

That said, there are practical signals worth knowing. Generic, over-polished language with no specific examples is one of the clearest. Phrases like "I am a highly motivated self-starter with a passion for excellence" or "I thrive in dynamic environments and consistently deliver results-driven outcomes" are AI favourites — and they tell you almost nothing about the person. Cover letters that could have been sent to any company, with no mention of a specific product, team, or aspect of your culture, are another red flag. Suspiciously perfect grammar and structure with no personality or voice can also indicate AI involvement, particularly when it contrasts sharply with how the candidate communicates in other contexts.

The most telling signal, though, is inconsistency. When a candidate's articulate, detailed written application doesn't match how they speak in an interview or perform in a task, that gap is worth exploring.

AI detection tools such as GPTZero and Originality.ai exist, and some hiring teams are using them. But they come with important caveats. These tools are imperfect, produce false positives, and are not reliable enough to serve as the sole basis for any hiring decision. Under the UK Equality Act 2010, fairness in recruitment decisions is a legal obligation, not an aspiration. Rejecting a candidate because a detection tool flagged their application, without any further investigation, exposes your organisation to genuine legal risk, particularly if that candidate belongs to a protected group.

Think of detection signals as prompts for better interview questions, not as verdicts. If something in an application feels AI-generated, use that as a reason to probe more specifically in the interview — not to disqualify the candidate outright.


Should Companies Have an AI Policy for Job Applications?

Yes — and most don't have one yet.

This is a genuine gap across UK organisations of all sizes. Without a clear policy, hiring managers are making inconsistent, ad hoc judgements about AI use in applications, which creates both fairness problems and legal exposure. A clear, communicated policy protects everyone.

A good AI policy for job applications covers three things: whether AI use is permitted, whether disclosure is required, and what the consequences of undisclosed AI use are. The CIPD's guidance on responsible AI use in recruitment provides a useful starting framework for UK HR teams.

There are broadly three positions an organisation can take, and each is defensible depending on your context:

Position 1 — AI permitted, no disclosure required. You accept that candidates will use AI and shift your assessment focus entirely to interviews, tasks, and structured evaluation. This is pragmatic and reduces administrative burden, but requires your downstream assessment to be robust.

Position 2 — AI permitted with disclosure. Candidates note where AI assisted in their application. This rewards self-awareness and transparency, and gives you useful information about how candidates engage with new tools — itself a relevant skill in most roles.

Position 3 — AI not permitted for specific components. You require the cover letter, or a specific written response, to be original work. This is reasonable for roles where written communication is a core competency, but must be clearly stated and consistently applied.

Whatever position you take, it must be communicated clearly in the job posting itself. Vague or unstated policies don't just create inconsistency — they create reputational and legal risk if a candidate later challenges a rejection decision.

There's also a GDPR angle that many HR teams overlook. If you're using AI detection tools to screen applications, candidates have a right to know. Under UK GDPR, transparency about automated processing isn't just good practice — it's a legal obligation. A simple sentence in your job posting stating that AI detection tools may be used as part of your screening process is both legally prudent and a signal of organisational integrity.

A practical tip: add a single, clear sentence to every job posting stating your AI policy. It sets expectations, demonstrates that you've thought about this, and — usefully — filters for candidates who read job postings carefully.
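For illustration only, that sentence might read something like the following, depending on the position you've taken. Position 1: "You're welcome to use AI tools when preparing your application; we assess capability through live tasks and structured interviews." Position 2: "You may use AI tools to help prepare your application; please note briefly where they assisted." Position 3: "We ask that your cover letter be your own original writing, as written communication is core to this role." And if you use detection tools during screening: "AI detection software may be used as part of our application review." These are example phrasings rather than vetted legal wording; run your final version past your usual employment law advice.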


Designing Assessments That Still Work in an AI-Assisted World

Here's the core insight that should shape your thinking: if your assessment can be completed perfectly by AI, it probably wasn't testing what you thought it was testing.

This isn't a criticism of how assessments have been designed historically — it's an invitation to revisit them. The good news is that the assessment approaches most resilient to AI use are also, generally, the most valid and legally defensible under UK equality law.

Live tasks are one of the most effective approaches. Ask candidates to complete a short written response, a problem-solving exercise, or a verbal walkthrough of their thinking in real time during the interview. It doesn't need to be elaborate — even a five-minute exercise reveals far more than a polished cover letter.

Specificity probes work well in interviews. When a candidate's application makes a claim — "I led a team through a significant restructure" — ask them to walk you through the first three decisions they made. AI can generate plausible-sounding narratives, but it cannot fabricate lived experience under direct, specific questioning. The details candidates reach for under pressure are revealing.

Values and culture questions that require personal narrative are similarly AI-resistant. "Tell me about a time you disagreed with your manager — what did you do, and what happened?" requires a real story. AI can generate a generic answer; a candidate with genuine experience will give you something specific, textured, and occasionally uncomfortable.

Portfolio and evidence-based review is worth considering for roles where output is the primary measure. For a content writer, a designer, or a data analyst, asking for work samples rather than relying on written applications removes the AI question almost entirely — and focuses assessment on what actually matters.

It's worth noting that structured interviews with consistent scoring criteria are already best practice under UK equality law. Designing AI-resilient assessments and reducing bias in your hiring process are not competing goals — they're the same goal, approached from different angles. Some forward-thinking UK employers are now explicitly telling candidates in their job postings: "We know you may use AI tools. We're more interested in how you think than how you write." That's not a concession — it's a more honest statement of what good hiring has always been about.


How Is AI Changing the Recruitment Process — For Both Sides?

The two-sided reality of AI in recruitment is worth naming clearly. Employers are using AI to screen faster; candidates are using AI to apply faster. The result is higher volume with potentially lower signal — more applications in the inbox, but fewer that tell you something genuinely useful about the person behind them.

For HR teams, this creates a real workload problem. More applications mean more time spent on initial screening, more noise to filter, and — paradoxically — more difficulty finding genuine fit. The administrative burden of recruitment was already significant; AI on the candidate side has amplified it.

This is where tools like Aura can make a meaningful difference — not in the application screening itself, but in freeing up HR time from the repetitive internal queries that consume Monday mornings. When your HR team isn't spending the first hour of the day answering the same holiday entitlement question for the fifth time that week, they have the bandwidth to actually read applications carefully, design better interview questions, and bring genuine human judgement to the hiring process.

AI also offers real benefits for candidates that are worth acknowledging. Greater accessibility for non-native English speakers, support for neurodivergent applicants, and a reduction in the advantage that expensive career coaching previously conferred — these are genuine improvements. The challenge is ensuring that the signal quality in your hiring process keeps pace with the tools candidates are using.


A Practical Checklist for UK HR Teams

Use this as a starting point for reviewing your current approach. None of these items require significant investment — most require a conversation and a policy decision.

- Decide your organisation's position on candidate AI use: permitted, permitted with disclosure, or restricted for specific components.
- State that position in a single clear sentence in every job posting.
- If you use AI detection or automated screening tools, disclose this to candidates; UK GDPR requires transparency about automated processing.
- Never reject a candidate solely because a detection tool flagged their application; treat flags as prompts for better interview questions.
- Keep human oversight at every stage of any automated screening, given the legal risk highlighted by the Workday case.
- Build live tasks, specificity probes, and personal-narrative questions into your interviews.
- For output-driven roles, weight work samples and portfolios over written applications.
- Use structured interviews with consistent scoring criteria, which are already best practice under UK equality law.


The Bottom Line: AI in Applications Isn't Going Away — So Set the Rules

The question was never really whether candidates would use AI. They will — just as they've always used whatever tools were available to present themselves well. The question is whether your hiring process is designed to surface genuine capability regardless of how the application was drafted.

There's a broader shift worth acknowledging here. As AI becomes a standard workplace tool, the ability to use it thoughtfully — knowing when to rely on it and when to apply independent judgement — is itself a skill worth assessing. Hiding from that reality doesn't serve your organisation or your candidates.

The good news for HR teams is that you have more control over this than it might feel. Clear policies, smarter assessment design, and human judgement applied at the right moments are all within reach — and none of them require a significant budget or a technology overhaul. They require clarity of thinking and the time to act on it.

If your team is currently stretched thin answering repetitive policy questions, that's worth addressing too. Aura is built to handle exactly that — freeing your HR team to focus on the strategic, human-centred work that actually shapes your organisation. Explore how it works at aura-hr.tech.

Frequently Asked Questions

Is it acceptable for candidates to use AI when writing a CV or cover letter?

Using AI to help with a job application is not inherently wrong. Candidates have always sought help from coaches, career services, and family members, and AI is simply a new form of that assistance. The more important question for recruiters is whether the application still reveals something meaningful about the candidate's actual capability.

Why do AI-generated applications create a problem for recruiters?

When large numbers of candidates use the same AI tools with similar prompts, applications start to look and sound nearly identical. This makes it very difficult for hiring managers to distinguish between candidates or assess authentic skills and cultural fit. The problem is less about AI use itself and more about the signal-to-noise ratio it creates in the hiring process.

Do hiring managers actually care if a candidate uses AI for their application?

Most recruiters are not categorically opposed to AI-assisted applications. Research suggests that the majority of hiring managers using AI in their own recruitment processes report efficiency gains, meaning they are often using AI themselves to screen the very applications AI helped write. What matters more is whether the AI use is concealing a lack of genuine relevant experience or capability.

Can AI-assisted applications benefit certain groups of candidates?

Yes. For non-native English speakers, AI can help communicate experience clearly without language barriers getting in the way. For neurodivergent candidates who struggle with written self-promotion, it can level an uneven playing field. It also gives candidates without access to expensive career coaches a form of support that was previously only available to more privileged applicants.

What should HR teams do in response to the rise of AI-generated applications?

Rather than attempting to ban AI use outright, HR teams are better served by setting a clear policy on AI assistance and redesigning their assessments to surface real talent regardless of how the application was drafted. The challenge is as much a design problem as a policy one: it requires practical changes to how candidates are evaluated beyond the initial written application.

About the author: Arun Mohan

Drives product development and AI innovation in HR. Formerly with Sleek and Expedia, he's an expert in AI, automation, and digital transformation.
