LONDON: A Columbia University student recently shone a light on a disturbing corner of today’s job market. Roy Lee, 21, was fed up with the antiquated way that large tech firms were testing job candidates with computer coding riddles you had to memorise, so he created a tool that his peers could use to beat the system.
His tool overlays a translucent window showing the latest version of ChatGPT, from which the applicant can copy and paste code during a test over Zoom; the recruiter sees none of it when the candidate shares their screen. Lee now faces expulsion from Columbia, he tells me, but he has also received multiple job offers from tech executives impressed by his hacker mindset and chutzpah.
Nobody likes a cheat. Amazon has said it will disqualify applicants who use artificial intelligence during their job interviews. Anthropic, a leading AI company, has made similar pronouncements – not just to weed out incompetent tricksters, but to stem the overwhelming flood of applicants using ChatGPT and other platforms to generate resumes and fill out applications.
HYPOCRITICAL TO BAN JOB APPLICANTS USING AI
But large employers have created this problem for themselves.
More than 80 per cent of companies use AI somewhere in hiring, and one in four use it for the entire recruitment process, according to Resume Builder, a recruitment advisory service. That makes banning applicants from using AI hypocritical, particularly when many of them will be expected to use it on the job.
When companies rely too heavily on AI for hiring, they also risk disadvantaging women and candidates with disabilities, as some legal complaints already allege. And by selecting applicants like Lee who are best at gaming an increasingly mechanised system, they may miss out on the best talent.
Perhaps it shouldn’t surprise us that the employers who know this best are AI companies themselves. Newcastle, UK-based Literal Labs says it deliberately uses humans instead of software to screen its applicants.
“I believe fundamentally in reviewing each [resume] submission manually,” says its chief technology officer, Leon Fedden. “We want to be super careful about how we are building our team... If we designate this to a stochastic parrot, long term the outcome would likely be not great.” “Stochastic parrot” refers to the criticism that AI models mostly mimic text scraped from the web.
TO USE OR NOT TO USE AI?
Applicants meanwhile face soul-crushing hurdles. Take Darcy O’Brien, who is in the final year of her math degree at Durham University in the UK and is looking for a junior role in finance. She is academically gifted and competent, but has struggled with being interviewed over Zoom by AI systems.
Finance giants often use firms like HireVue to conduct these preliminary interviews, which flash a series of questions on screen and give hopefuls a countdown of seconds to prepare. Candidates then have three minutes to look into their webcam and answer each one.
It’s an awkward experience. “There’s nothing on the screen, but you have to make it sound like you’re talking to someone,” O’Brien says.
Across five AI video interviews, she has tried different approaches: talking more, using less time, sounding more confident. “None of it’s working.” Nor did ChatGPT, which she used a few times to prewrite some of her answers.
But the secret tip for young applicants might be to avoid using AI at all.
Radhey Patel, a final-year student at the London School of Economics, tells me he started using ChatGPT when it first came out two years ago to help with internship applications. “Now I use it less and less,” he says, pointing out that employers may be starting to notice the generic, jargony features of chatbot-generated text. “Standing out from the crowd is more important than ever.”
That flies in the face of an argument I’ve heard from employers, who say AI is a godsend for young job hunters. One senior partner at a law firm told me the technology would level the playing field by helping applicants from disadvantaged backgrounds polish their resumes and craft professional emails. “It’s an equaliser,” he said.
But AI can’t address the fundamental advantages that privileged candidates still enjoy, like alumni networks and the cultural capital that comes from growing up in professional environments. It might level the playing field on presentation, but not on what really counts in getting ahead.
BOTH JOB APPLICANTS AND EMPLOYERS ARE LOSING
With all that in mind, it seems impossible to recommend the right approach for graduates, including those from elite MBA programmes who are already facing a sluggish market for white-collar jobs.
Using AI could statistically increase their chances of getting an interview, sure, but avoiding it can also help them stand out. There are no easy answers.
One thing that’s clear, though, is that we’re witnessing a new kind of asymmetric competition in the job market, one where the most valuable skill is the ability to slip through AI gatekeepers.
That’s not innovation. It’s dysfunction masked as efficiency. Ironically, both sides are losing this battle.
The solution isn’t more AI, but a return to human judgment. Companies that keep a human element in their hiring process can give themselves a competitive advantage and spot candidates whose talents don’t translate neatly into an AI-friendly format. That is well worth the investment.