Ashley Nunes studies regulatory policy at MIT’s Center for Transportation & Logistics.
The year’s end saw more than 94,000 jobs added to the economy. That, according to Statistics Canada, is the largest one-month jobs gain since 2012. The agency also reports that overall employment has grown by more than 200,000 in the past year and that the national unemployment rate now sits at 5.6 per cent. That’s the lowest jobless figure recorded in more than four decades.
A tightening labour market comes as more workers are willing to switch jobs. A recent survey found more than two-thirds of respondents were either actively searching for a new position or would consider one if approached. The reason comes down to money. Job-hopping can mean higher earnings, and workers value earnings over other company perks such as gym memberships, paid time off and an easy commute. In the digital era, however, securing higher earnings means taking on a new challenge: impressing robots.
By one estimate, more than 70 per cent of resumes are rejected before a recruiter has a chance to review them. The reason? “Algorithmic hiring,” as it is commonly known. Companies are increasingly turning to software code to source new talent. Demand for these algorithms has spiked in recent years and this trend is expected to continue. This means a robot – rather than a human recruiter – could soon decide whether you land your next job.
There are many reasons for this change. For one thing, combing through resumes takes time and time is money. While human recruiters can spend hours sifting through applicant data, algorithms – backed up by raw computing power – need mere seconds. This reduces the time needed to hire, making software-driven recruitment a less resource-intensive exercise. That’s good for both employers and potential recruits.
Conventional hiring practices are also prone to bias. Research shows that despite holding nearly identical job credentials, applicants with non-English-sounding names – particularly Chinese, Indian and Pakistani – are much less likely to land an interview when hiring decisions are left to humans. Applicants from low-income neighbourhoods suffer a similar fate, as do the elderly and women. Algorithmic hiring can cut through these biases, opening up opportunities to qualified candidates and marginalized communities that might otherwise have been overlooked.
At least, that’s the idea. But algorithmic hiring introduces its own challenges. For one thing, learning what companies want (and need) takes time. That’s as true for algorithms as it is for human recruiters. For algorithms to make the right calls, they may – by virtue of sifting through initially unfamiliar data – also make wrong ones. Some qualified candidates could unintentionally fall through the cracks.
A larger problem is that algorithmic hiring relies on past records to inform future decisions. “Training” data – which companies have piles of – provide insight into the types of candidates recruiters have previously signed off on. Algorithmic hiring simply imitates this process using fewer (human) resources. The problem? It also reproduces the biases embedded in past human decision-making – the very biases these algorithms are meant to purge.
Biases such as age discrimination – a practice that, despite being officially banned, more than 60 per cent of older adults report having seen or personally experienced. It’s also something that social-networking giant Facebook stands accused of. A pending class-action lawsuit accuses the company of perpetuating age discrimination. The medium? Well-honed algorithms that “disproportionately direct ads to younger workers at the exclusion of older workers.” The company denies these charges. So does telecom provider T-Mobile, which is also named in the lawsuit.
And then there’s Amazon. In 2014, the Seattle-based company tasked its engineers with automating the hiring process. A year later, those efforts were scrapped. The reason? Amazon’s engineers – like most of the tech industry – are overwhelmingly male. That’s something the company’s hiring algorithm spotted when it plowed through historical hiring data. It surmised that male candidates were preferable to female ones and reportedly began penalizing candidates from the latter group. Although the company tried to fix this, there was no guarantee that discriminatory hiring practices could – thanks to technology – be avoided.
In retrospect, these experiences with algorithmic hiring are hardly surprising. As the American Civil Liberties Union’s Rachel Goodman notes, if you simply ask software to discover other résumés that look like the résumés in a “training” data set, reproducing the demographics of the existing work force – troubling as those demographics may be – is virtually guaranteed. Put another way, algorithms aren’t inherently biased. They learn that from us.
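Goodman’s point can be made concrete with a toy sketch. All data and field names below are hypothetical: the idea is simply that a screener ranking new résumés by similarity to past hires will inherit whatever skew those hires carry, even through an innocuous-looking proxy field such as a neighbourhood or alma mater.

```python
# Hypothetical illustration: a similarity-based screener trained only on
# past hires reproduces the demographics of those hires.

def similarity(a, b):
    # Negative squared distance over numeric features, plus a bonus when a
    # categorical field matches. The "proxy" field stands in for a seemingly
    # neutral attribute (postal code, school) that correlates with demographics.
    score = -((a["experience"] - b["experience"]) ** 2
              + (a["skill"] - b["skill"]) ** 2)
    if a["proxy"] == b["proxy"]:
        score += 1.0
    return score

def screen(candidate, past_hires):
    # Score a candidate by their closest match among historical hires.
    return max(similarity(candidate, hire) for hire in past_hires)

# Historical hires all come from proxy group "A".
past_hires = [
    {"experience": 5, "skill": 8, "proxy": "A"},
    {"experience": 7, "skill": 9, "proxy": "A"},
    {"experience": 6, "skill": 7, "proxy": "A"},
]

# Two new applicants with identical qualifications, different proxy group.
applicant_a = {"experience": 6, "skill": 8, "proxy": "A"}
applicant_b = {"experience": 6, "skill": 8, "proxy": "B"}

print(screen(applicant_a, past_hires))  # scores higher
print(screen(applicant_b, past_hires))  # scores lower, identical credentials
```

Nothing in the code mentions a protected attribute; the skew enters entirely through the training data and the proxy field, which is exactly the failure mode described above.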
Companies should use technology to recruit human capital. Doing so cuts costs and boosts efficiency. But technology comes with risks and should be treated accordingly. After all, companies – not algorithms – will ultimately be held responsible for who gets hired (and who doesn’t).