
Last year, Goldman Sachs received a record 315,126 applications for its internship programme. Google receives roughly 3 million job applications annually; McKinsey gets around 200,000. Even smaller companies now receive, on average, around 250 curricula vitae (CVs) or resumes for each corporate job opening.
That sheer volume of job applications is far beyond what human recruiters can realistically manage.
And with competition now global and top talent snapped up in days, it’s no surprise businesses are leaning on artificial intelligence (AI) to streamline their recruitment.
The promise of AI
Valued at US$577.7m in 2023 and projected to reach US$1.05bn by 2032, the global AI recruitment market is big business.
87% of companies worldwide are already using some form of AI during their recruitment process.
From body-language and voice analysis to online tests, CV scanners and even personality profiles built from social media, AI is rapidly becoming the gatekeeper to opportunity, explains Amanda Trewhella, director at leading UK law firm Freeths.
“AI can sift applications in seconds, spotlight potential talent and even scan databases to identify niche skills for senior roles.
“And while humans are prone to both conscious and unconscious bias – often favouring people who feel ‘like us’ and building teams in our own image – AI can counter this by shortlisting candidates based purely on objective factors such as skills and experience.
“Chatbots can ease pressure on human resources (HR) teams while improving the candidate experience by answering questions, providing updates and keeping applicants engaged throughout the process.
“And beyond selection, AI can craft smarter job descriptions and advertisements, flagging unintentionally biased wording and rewriting it to be more inclusive.
“Increasingly, AI is making the first call on who’s in – or out.”
And its use isn’t restricted to large corporations.
Small and medium-sized enterprises are increasingly harnessing the technology to create efficiencies, speed things up, remove manual work, improve reach and remove biases.
That is, to hire fairer, faster and better.
But here’s the thing no one talks about.
Recruitment sounds simple. On paper, it’s logic and keywords.
But in real life, finding the right candidate is about more than ticking experience and qualification boxes; it’s about understanding the organisation’s vision, culture and long-term goals.
AI can scan for experience, but it can’t catch that half-second pause when a candidate subtly hesitates over a question about teamwork.
You can feed AI 100,000 CVs. What it won’t do is decode that vague “not sure why, just a feeling” from a hiring manager, and turn it into a decision.
“AI can’t tell you if someone’s adaptable, empathetic or a good cultural fit,” says Ms Trewhella.
“That still takes people. Psychometric tests may give pointers, but real human interaction is irreplaceable when it comes to understanding what makes someone tick.”
And trust in AI is still lagging behind adoption.
Friend or foe?
In its latest employment survey, Freeths found that AI still has some way to go to convince UK businesses of its usefulness and trustworthiness in recruitment.
“Only 5% of respondents said they trust AI to recruit the best candidates,” says Ms Trewhella.

That’s down from 11% in 2024.
Job seekers are sceptical too. Nearly half of US job seekers believe AI is more biased than human recruiters.
“This may be understandable given that AI is still in the early stages of implementation and public opinion remains largely sceptical – especially in light of several widely publicised cases where AI systems have demonstrated bias, favouring certain groups of candidates over others,” explains Ms Trewhella.
What’s more, these systems can filter out highly qualified candidates if their profiles don’t match the exact criteria specified in the job description.
“Strong candidates risk being overlooked simply because their information doesn’t fit the system’s expectations,” says Ms Trewhella.
“And that’s a particular concern for neurodivergent applicants who may present their experience differently on their CVs.
“At the same time, some candidates may learn to ‘game’ the system, tailoring their CVs to match what the AI is scanning for, even if they’re not the best fit for the role.”
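The filtering problem Ms Trewhella describes can be seen in a minimal sketch of exact-match CV screening. The required phrases and sample CV snippets below are hypothetical, purely to illustrate how a strong candidate who describes the same skills in different words slips through the filter:

```python
# Hypothetical required phrases an exact-match screener might look for.
REQUIRED_KEYWORDS = {"project management", "stakeholder engagement"}

def keyword_screen(cv_text: str, required: set[str] = REQUIRED_KEYWORDS) -> bool:
    """Pass a CV only if every required phrase appears verbatim."""
    text = cv_text.lower()
    return all(phrase in text for phrase in required)

# Two fabricated CV snippets describing comparable experience:
cv_a = "Led cross-functional delivery and coordinated with clients and sponsors."
cv_b = "Project management and stakeholder engagement across three programmes."

print(keyword_screen(cv_a))  # False: same skills, different phrasing
print(keyword_screen(cv_b))  # True: matches the scanner's exact phrases
```

The second candidate passes simply by echoing the scanner’s vocabulary, which is exactly the “gaming” behaviour the quote warns about.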
Bias by design
“At the moment, AI is only as good as the data it’s fed,” warns Ms Trewhella.
Train it on historical hiring decisions, and it risks reinforcing old biases.
“If a company has historically hired more men than women, an AI trained on that data may ‘learn’ to prefer male candidates – overlooking skills, experience and potential. So, instead of removing bias, it hardwires it into the system.”
The fix?
Review your data, strip out patterns of bias and keep updating the system.
“Regular audits of your AI are essential – checking for bias across gender, ethnicity and other protected characteristics,” says Ms Trewhella.
“Make sure diverse teams are testing it, because they’ll spot things others miss. And test your system with fabricated data to see if certain candidates are unfairly favoured or filtered out.
“At the same time, train your AI to recognise transferable skills and alternative qualifications – not just rigid criteria – so it spots potential, not just patterns.”
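The auditing Ms Trewhella recommends, testing the system with fabricated data and checking outcomes across groups, can be approximated with a short sketch. The data here is invented, and the 0.8 threshold is the widely used “four-fifths” screening heuristic rather than anything prescribed in the article:

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the selection rate per group from (group, hired) outcomes."""
    hired: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for group, was_hired in decisions:
        total[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / total[g] for g in total}

def disparate_impact(decisions: list[tuple[str, bool]]) -> float:
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 fail the common 'four-fifths' screening rule."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Fabricated audit run: (group, hired) outcomes from a screening pass.
audit = [("A", True), ("A", True), ("A", False), ("A", True),
         ("B", True), ("B", False), ("B", False), ("B", False)]

print(selection_rates(audit))   # {'A': 0.75, 'B': 0.25}
print(disparate_impact(audit))  # ~0.33, well below 0.8: flags a disparity
```

A real audit would run the live model over synthetic candidate profiles that differ only in protected characteristics, but the same comparison of selection rates is the core of the check.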
Who’s really making the call?
Hiring algorithms must be transparent so every decision is clear and explainable – to both recruiter and candidate, explains Ms Trewhella.
“There should never be a situation where a recruiter cannot explain why one person was chosen over another. Blaming the AI won’t stand up; the burden of proof will always fall on the business.”
And while AI can be an incredibly powerful tool, it’s not perfect, she says.
“It works best when paired with human insight and used with a clear understanding of what it can, and can’t, do.
“Resume screening, interview scheduling, follow-ups, database clean-up – AI can take care of the time-eaters. That frees recruiters to do what only people can: engage candidates, assess cultural fit and handle complex or sensitive situations with empathy.
“AI can support the process, but it should never replace people. At the end of the day, the final hiring decision must always be human.”

Regulations are being introduced in various regions to ensure fairness, transparency and data privacy in AI-assisted hiring, particularly regarding bias and discrimination. Businesses using these tools need to comply with those rules and follow best practices, including human oversight and disclosure to candidates.