TL;DR: The US lawsuit against Workday over AI-driven recruitment bias is a wake-up call for UK HR teams: if your hiring process uses AI screening tools, the Equality Act 2010 holds you accountable for discriminatory outcomes even when no human intended them.
Introduction: A US Lawsuit with a Very British Warning
A man applies for over 100 jobs. He's rejected every single time. He believes the reason isn't his qualifications — it's the AI screening his CV before any human ever sees it. That's the allegation at the heart of Mobley v. Workday, a case that has sent ripples well beyond the US courts where it originated.
Derek Mobley claims that Workday's AI-powered screening tools systematically filtered him out across more than 100 job applications, allegedly discriminating on the basis of race, age, and disability. In early 2026, the court authorised collective action status, meaning other job seekers who believe they were similarly screened out can now join the lawsuit. Workday's defence — that it's a vendor, not an employer, and that hiring decisions ultimately rest with the companies using its tools — has so far failed to get the case dismissed.
Here's why this matters for UK HR teams: you don't need to be using Workday for this case to be relevant to you. If your recruitment process involves any AI-assisted CV screening, shortlisting, or candidate scoring, the Equality Act 2010 has something very specific to say about your legal obligations. This article explains what the case is actually about, how UK law applies, and what practical steps HR teams should take right now.
What the Workday Case Is Actually About
At its core, Mobley v. Workday is a case about disparate impact — the legal theory that a practice doesn't need to be intentionally discriminatory to be unlawful. It simply needs to produce discriminatory outcomes.
Derek Mobley alleges that Workday's AI screening tools systematically rejected applicants over 40 (protected under the US Age Discrimination in Employment Act), applicants with disabilities, and applicants of certain racial backgrounds. The mechanism, according to the lawsuit, wasn't explicit bias — it was the AI flagging CV gaps, penalising certain keyword patterns, and making inferences from data points that correlate with protected characteristics without ever explicitly referencing them.
Workday's central defence has been straightforward: it's a technology vendor, not an employer. The companies using its tools make the final hiring decisions, not Workday. It's a logical argument — but the court rejected it at the motion-to-dismiss stage, allowing the case to proceed. That's a significant signal. It suggests courts are willing to examine whether HR tech vendors can bear liability for the discriminatory outcomes their algorithms produce, not just the employers deploying them.
It's worth noting that a separate individual suit was dismissed in April 2026 with no admission of liability from Workday. But the class action continues, and the precedent being tested is consequential: AI is not a neutral filter. If it produces biased outcomes at scale, accountability doesn't simply evaporate because the decision was automated.
The case also raises a question that resonates far beyond the US: who is responsible when an algorithm makes a decision that affects someone's career? The answer, increasingly, is everyone in the chain — including the organisations that chose to deploy the tool.
Can This Happen in the UK? How the Equality Act 2010 Applies to AI Hiring
The short answer is yes — and the UK legal framework is arguably more comprehensive than its US equivalent.
The Equality Act 2010 prohibits both direct and indirect discrimination across nine protected characteristics: age, disability, race, sex, religion or belief, sexual orientation, gender reassignment, marriage and civil partnership, and pregnancy and maternity. For AI-assisted hiring, indirect discrimination is the primary legal risk.
Indirect discrimination occurs when a provision, criterion or practice (PCP) — even one that appears entirely neutral on its face — disproportionately disadvantages people who share a protected characteristic. It's unlawful unless the employer can demonstrate it's a proportionate means of achieving a legitimate aim. An AI screening tool is, without question, a PCP. If it systematically scores down CVs with names associated with certain ethnicities, penalises employment gaps that disproportionately affect women returning from maternity leave or disabled applicants, or uses keyword matching that filters out candidates from certain educational backgrounds, that's indirect discrimination — regardless of whether anyone intended it.
Crucially, the employer is liable, not just the vendor. Deploying a third-party AI tool does not transfer your legal responsibility under the Equality Act. The Equality and Human Rights Commission (EHRC) has enforcement powers that include formal investigations, and at employment tribunal discrimination awards are uncapped: there is no ceiling on compensation for discrimination claims in the UK.
There's also a GDPR dimension that many HR teams overlook. Article 22 of UK GDPR gives individuals the right not to be subject to solely automated decisions that produce legal effects or similarly significant effects. Recruitment decisions almost certainly qualify. This means that even where AI screening is used, a meaningful human review must be part of the process — not a rubber stamp after the algorithm has already filtered the field.
Consider a practical example: an AI screening tool trained on historical hiring data from a technology company with a predominantly male workforce will likely learn to replicate that pattern. CVs that look like the historical hires — in terms of language, career trajectory, and educational background — will score higher. CVs that don't will score lower. No one programmed it to discriminate. But the outcome is discriminatory, and your organisation will own it.
The 'Vendor Defence' Won't Protect You
Workday's argument — "we're just the tool, employers make the decisions" — is exactly the argument UK employers might be tempted to make about their own AI vendors. It's understandable. It's also legally insufficient.
Under the Equality Act, employers are responsible for their recruitment processes. Full stop. The ICO and the EHRC have both signalled clearly that outsourcing a decision to an algorithm does not outsource the legal obligation that comes with it. If your AI screening tool produces discriminatory outcomes, "the vendor assured us it was fair" will not constitute a defence at an employment tribunal.
Contractual protections with vendors — bias audit rights, indemnity clauses, data processing agreements — are important and you should have them. But they are a backstop, not a substitute for your own due diligence. If a claim reaches tribunal, the question won't be what your contract says. It will be what your organisation actually did to ensure the tool it deployed didn't discriminate.
This is where the distinction between different types of AI tools in HR becomes meaningful. Purpose-built HR AI tools with transparent, auditable source grounding — like Aura — differ fundamentally from generic AI bolted onto recruitment workflows. When the accountability chain matters legally, the architecture of the tool matters too. Knowing exactly what your AI is drawing on, and being able to demonstrate that, is part of responsible deployment.
How AI Screening Tools Actually Introduce Bias (And Why It's Hard to Spot)
One of the most challenging aspects of algorithmic bias is that it's often invisible until you deliberately look for it. Here's how it typically enters AI screening tools.
Training data bias is the most common source. If a model was trained on historical hires from a homogeneous workforce, it learns to replicate that homogeneity. It isn't programmed to discriminate — it's programmed to find candidates who look like the people who were previously hired. The bias is baked in before the tool is ever deployed.
Proxy discrimination is subtler and harder to detect. AI doesn't need to see a protected characteristic directly to discriminate on its basis. It can infer age from graduation year, infer ethnicity from name or postcode, infer disability from employment gaps, and infer gender from the language used in a CV. Research published in the Cardozo Law Review on AI hiring practices and gender inequality demonstrates how these proxy variables can systematically disadvantage applicants without any explicit protected characteristic field in the data.
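If you collect equality monitoring data, you can test for proxies directly rather than taking it on trust. The sketch below is illustrative only, with hypothetical column names; the point it makes is that a feature like graduation year can encode age almost perfectly even though no age field appears anywhere in the CV data.

```python
# Illustrative only: checking whether innocuous-looking CV features act as
# proxies for protected characteristics. Column names are hypothetical and
# assume an equality-monitoring extract you already hold.
import pandas as pd

candidates = pd.read_csv("screening_sample.csv")

# Graduation year needs no 'age' field to encode age.
r = candidates["graduation_year"].corr(candidates["age"])
print(f"graduation_year vs age: r = {r:.2f}")  # typically close to -1

# The same check works for any feature/characteristic pair you can observe.
for feature in ["employment_gap_months", "postcode_deprivation_rank"]:
    for characteristic in ["age", "disability_flag"]:
        r = candidates[feature].corr(candidates[characteristic])
        print(f"{feature} vs {characteristic}: r = {r:.2f}")
```

A strong correlation doesn't prove the tool is discriminating, but it tells you which features deserve scrutiny before you let them influence screening.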
CV gap penalisation is a particularly significant risk under UK law. Tools that score down unexplained employment gaps disproportionately affect women (who are more likely to have taken maternity leave), disabled applicants (who may have had periods of ill health), and carers — all protected groups under the Equality Act 2010. If your screening tool treats a two-year gap as a red flag, you may already have an indirect discrimination problem.
Keyword matching introduces gender bias when job descriptions use gendered language. If the AI is matching CVs against a job description that uses predominantly masculine-coded language — and research consistently shows this is common in technical and leadership roles — it can systematically disadvantage female candidates whose CVs don't mirror that language.
Finally, the black box problem: many commercial AI screening tools don't explain why a candidate was rejected. Without explainability, bias detection requires deliberate, structured auditing. It won't surface on its own.
A 5-Step Due Diligence Checklist Before Using AI for CV Screening
Before deploying any AI tool in your recruitment process, work through these five steps. They won't eliminate risk entirely, but they will demonstrate the kind of reasonable due diligence that matters if a claim is ever made.
Step 1 — Ask your vendor the hard questions. Request a bias audit report. Ask what training data was used, whether the tool was tested for disparate impact across protected characteristics, and what the false negative rates are by demographic group. Ask specifically about age, disability, and race — the characteristics most commonly implicated in algorithmic hiring bias. If the vendor can't provide clear answers, that tells you something important.
Step 2 — Review your job descriptions before you connect them to any AI tool. AI screening is only as fair as the criteria it's matching against. Audit your job descriptions for gendered language, unnecessarily restrictive requirements (such as degree requirements where a degree genuinely isn't necessary), and criteria that may indirectly exclude protected groups. Tools like the Gender Decoder can help surface language bias quickly.
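If you want a quick first pass before reaching for a dedicated tool, a simple keyword scan in the spirit of the Gender Decoder can flag obviously coded language. The sketch below is a rough illustration only; the word lists are a small hypothetical sample, not the validated lexicon the published research uses.

```python
# Rough illustration of a gendered-language scan for job descriptions.
# The word lists are a small hypothetical sample, not a validated lexicon.
MASCULINE_CODED = {"competitive", "dominant", "driven", "fearless", "ninja", "rockstar"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "interpersonal", "empathetic"}

def scan_job_description(text: str) -> dict:
    words = {w.strip(".,;:()!").lower() for w in text.split()}
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

print(scan_job_description(
    "We need a driven, competitive engineer to join our collaborative team."
))
# {'masculine_coded': ['competitive', 'driven'], 'feminine_coded': ['collaborative']}
```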
Step 3 — Run a shadow audit on a sample of real applications. Before full deployment, run the AI tool in parallel with your existing process on a representative sample. Compare outcomes by gender, age band, and — where you have data — ethnicity. Look for statistically significant disparities between the AI's shortlist and your existing process. If the AI is consistently filtering out one demographic group, investigate before you scale.
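What that comparison can look like in practice is sketched below, assuming you have a demographic attribute you can lawfully monitor and 0/1 shortlist flags from the two parallel processes. The four-fifths threshold used here is a widely cited heuristic borrowed from US practice, not a UK legal test; treat anything it flags as a prompt to investigate, not a verdict.

```python
# Minimal sketch of a shadow-audit comparison. Column names are hypothetical:
# 'group' is the demographic attribute you can lawfully monitor, and
# 'ai_shortlisted' / 'human_shortlisted' are 0/1 flags from the parallel runs.
import pandas as pd

sample = pd.read_csv("shadow_audit_sample.csv")

# Selection rate per group under each process.
rates = sample.groupby("group")[["ai_shortlisted", "human_shortlisted"]].mean()
print(rates)

# Four-fifths heuristic: flag any group whose AI selection rate falls below
# 80% of the best-served group's rate.
ai_rates = rates["ai_shortlisted"]
impact_ratio = ai_rates / ai_rates.max()
print(impact_ratio[impact_ratio < 0.8])
```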
Step 4 — Establish a human review checkpoint. Never allow AI to make a final screening decision without meaningful human review, particularly for borderline cases. Document the rationale for human decisions. This is both good practice and a UK GDPR requirement: Article 22 mandates human involvement in automated decisions with significant effects on individuals. A human who simply approves whatever the algorithm recommends is not meaningful oversight.
Step 5 — Build in ongoing monitoring. Bias doesn't stay static — it can drift as the tool processes more data and as your applicant pool changes. Set a quarterly review cadence. Track your shortlist demographics against your overall applicant pool. If the ratios diverge significantly over time, investigate before you have a tribunal claim on your hands.
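A minimal version of that quarterly check is sketched below, again with hypothetical column names; the 10-percentage-point threshold at the end is a placeholder you would tune to your own volumes.

```python
# Sketch of a quarterly drift check: does each group's share of the shortlist
# still track its share of the applicant pool? Column names are hypothetical.
import pandas as pd

apps = pd.read_csv("applications.csv")  # one row per application: 'quarter',
                                        # 'group', and a 0/1 'shortlisted' flag

def share_by_group(df: pd.DataFrame) -> pd.Series:
    counts = df.groupby(["quarter", "group"]).size()
    return counts / counts.groupby(level="quarter").transform("sum")

pool_share = share_by_group(apps)
shortlist_share = share_by_group(apps[apps["shortlisted"] == 1])

# Negative values mean a group is under-represented on the shortlist relative
# to the applicant pool; groups with no shortlisted candidates count as zero.
divergence = shortlist_share.reindex(pool_share.index, fill_value=0) - pool_share
print(divergence[divergence < -0.10])
```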
What 'Good' Looks Like: Practical Audit Steps for UK HR Teams
Beyond the pre-deployment checklist, ongoing governance is what separates organisations that manage AI risk well from those that discover it at tribunal.
Start by documenting your AI use comprehensively — which tools are used at which stages of recruitment, what decisions they influence, and who has oversight responsibility. You'll need this documentation for any EHRC investigation or tribunal disclosure. If you can't describe your AI recruitment process clearly, you can't defend it.
Create an AI recruitment impact assessment — similar in spirit to a Data Protection Impact Assessment (DPIA) under UK GDPR, but focused on equality risk. Map the potential harms, document your mitigations, and commit to reviewing it annually or whenever you change tools or processes.
Test with matched CVs. Submit CVs that are identical except for the variable you want to test, such as the candidate's name, postcode, or an employment gap, and compare the scores your AI tool returns. This is a low-cost, practical way to surface proxy discrimination before it affects real candidates. It's also the kind of evidence that demonstrates good faith if a claim is ever made.
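A sketch of that test follows. The `score_cv` function is a stand-in for whatever scoring interface your vendor exposes, and the CV template, names, and postcodes are purely illustrative.

```python
# Sketch of a matched-CV test. `score_cv` is a stand-in for however your vendor
# lets you score a single CV (API, batch upload, sandbox environment, etc.).
import random
from itertools import product

# One anonymised CV template with {name}, {postcode} and {gap} placeholders.
BASE_CV = open("base_cv_template.txt").read()

NAMES = ["James Wilson", "Adeola Okafor", "Margaret Hughes", "Mohammed Khan"]
POSTCODES = ["SW1A 1AA", "B9 4AA"]
GAPS = ["", "Career break, 2021 to 2023."]

def score_cv(text: str) -> float:
    # Stub so the sketch runs; replace with your vendor's scoring interface.
    return random.random()

results = []
for name, postcode, gap in product(NAMES, POSTCODES, GAPS):
    cv = BASE_CV.format(name=name, postcode=postcode, gap=gap)
    results.append({"name": name, "postcode": postcode, "gap": bool(gap),
                    "score": score_cv(cv)})

# The CVs are identical apart from these fields, so any consistent score gap
# points at proxy discrimination worth investigating.
for row in sorted(results, key=lambda r: r["score"]):
    print(row)
```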
Maintain a human override log. Every time a human reviewer overrides an AI recommendation — in either direction — record it and note why. Patterns in overrides can reveal systematic bias that wouldn't otherwise be visible. If your reviewers are consistently promoting candidates the AI scored low, or rejecting candidates it scored high, that's a signal worth investigating.
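The log itself doesn't need to be sophisticated. The sketch below assumes a simple CSV with illustrative field names and shows one way to surface the direction of overrides and the reasons reviewers give for them.

```python
# Sketch of reviewing a human override log. Field names are illustrative.
import csv
from collections import Counter

# Expected columns: candidate_ref, ai_recommendation (shortlist/reject),
# human_decision (shortlist/reject), reviewer, reason
with open("override_log.csv", newline="") as f:
    rows = list(csv.DictReader(f))

overrides = [r for r in rows if r["ai_recommendation"] != r["human_decision"]]
print(f"Override rate: {len(overrides) / len(rows):.1%}")

# Which direction do the overrides run, and why?
directions = Counter(f"{r['ai_recommendation']} -> {r['human_decision']}" for r in overrides)
reasons = Counter(r["reason"] for r in overrides)
print(directions.most_common())
print(reasons.most_common(5))
```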
Engage your employment legal counsel to review your AI recruitment process against Equality Act obligations, particularly if you're using tools trained on non-UK data. US training data may not reflect UK workforce demographics, UK educational pathways, or UK career patterns — and the mismatch can introduce bias that a US-focused vendor hasn't tested for.
The EHRC's technical guidance on the Equality Act and employment is the authoritative reference for UK employers. Bookmark it — and treat it as more reliable than any vendor's compliance claims.
The Bigger Picture: AI in HR Requires Human Judgment, Not Just Human Sign-Off
The Workday case illustrates something broader than one lawsuit: AI tools in HR are only as trustworthy as the oversight structures around them.
The efficiency case for AI in recruitment is real. AI-assisted recruitment processes can reduce time-to-hire by 30–50% (Fyxer Index, 2024), and AI-assisted hiring has been linked to 9% higher quality hires (LinkedIn Talent Trends Report, 2024). These are meaningful gains for HR teams that are already stretched thin. But efficiency without accountability isn't an asset — it's a liability waiting to surface.
The answer isn't to avoid AI in HR. It's to use it where it genuinely helps — answering policy questions, scheduling, onboarding support, surfacing relevant information — while keeping human judgment firmly in the loop for decisions that affect people's careers and livelihoods. According to the IBM Institute for Business Value, only 20% of executives say HR owns the future of work strategy at their organisation. That's a problem. If HR doesn't lead on AI governance in people decisions, IT or procurement will — and neither has the same accountability for employment law outcomes.
This is the distinction that matters: AI that augments HR professionals versus AI that attempts to replace their judgment. Aura, for example, is designed to answer employee questions grounded in verified company policies and labour law — not to make autonomous decisions about people's careers. The goal is to free up HR professionals to focus on the decisions that genuinely require human judgment, not to remove them from the equation.
The HR teams that will navigate this well are those who treat AI as a tool that requires governance, not a solution that removes the need for it. It's called Human Resources for a reason. The human element isn't a limitation of AI in HR — it's the point.
Key Takeaways for UK HR Teams
The Workday lawsuit is a US case with direct UK relevance. The Equality Act 2010 creates legal exposure for algorithmic hiring bias that is equivalent to, and arguably clearer than, the exposure under the US framework in which the case is being litigated.
Indirect discrimination is the primary risk: AI tools that produce disparate outcomes for protected groups are unlawful under UK law, regardless of whether anyone intended the discrimination. The mechanism doesn't matter. The outcome does.
The vendor defence won't protect you. Under the Equality Act, employers own their recruitment processes — including the AI tools they choose to deploy within them. "The vendor told us it was compliant" is not a defence at an employment tribunal.
The practical steps are clear: audit your vendors rigorously, review your job descriptions before connecting them to any AI tool, run shadow tests on real applications, maintain meaningful human review checkpoints, and monitor your shortlist demographics against your applicant pool on a quarterly basis.
If you're evaluating AI tools for HR — whether for recruitment support, policy queries, or employee onboarding — the questions you ask vendors about bias, transparency, and accountability matter enormously. Book a demo with Aura to see how purpose-built HR AI handles these questions differently, and why the architecture of the tool you choose is part of your compliance story.