Opinion

Stephanie Dinkins at work in her Brooklyn studio in New York on June 10. For the past seven years, Dinkins has experimented with AI's ability to realistically depict Black women smiling and crying. FLO NGALA/The New York Times News Service

Gus Carlson is a U.S.-based columnist for The Globe and Mail.

If you are among the many humans who are increasingly suspicious of artificial intelligence, a New York law enacted last week to address the discriminatory dangers of AI technology used in job recruiting, employee evaluations and promotions won’t do much to ease your fears.

The new law, the first of its kind in the world, is aimed at smoking out sexism, racism and other potential biases that have crept into the AI-based hiring, performance evaluation and promotion technology used by an increasing number of the city’s more than 180,000 businesses.

But it just sounds like more expense and aggravation for small businesses, and more money out of taxpayers’ pockets for another government process that treats the symptoms of an increasingly problematic tech trend, not the cause.

In simple terms, the new statute says hiring software that relies on machine learning or AI to help employers and employment agencies based in the city choose preferred candidates or identify bad apples must pass annual third-party audits to show it is free of bias. The results of those audits must be made public. Violations and complaints will be handled by the Department of Consumer and Worker Protection.

The law’s intentions are noble, and the very need for it should raise an eyebrow or two among even AI’s most fervent supporters. Shouldn’t we be concerned that the technology’s “intelligence” is generating enough bias to warrant regular audits?


But shouldn’t we also be concerned that the audits will be conducted using other software, so in effect we will have software policing software, a recipe for even more issues? It seems the people who mourn that there is no “human” left in human resources might be right.

More problematic still: to say this is an onerous and probably unenforceable process for many businesses, especially small ones, is an understatement. But this is New York, which has never seen a regulation it didn’t love. As is often the case with its anti-business stance, it’s the little guys who get hurt the most.

Just look at the city’s ludicrous new law banning coal- and wood-burning pizza ovens as a critical tactic in fighting climate change. Is the minuscule carbon benefit really worth killing a few small family-run restaurants in largely immigrant neighbourhoods whose owners rely on that income to live? And there were the crushing COVID-era vaccine mandates on businesses that sank many small employers and had the city’s commercial leaders in revolt.

The new AI law is no exception. Small businesses tend to be heavy users of any technology that streamlines a process, whether it is recruiting, accounting, inventory management or sales tracking. Unlike big companies, they rarely have the staff to do the jobs in-house or the financial resources to outsource them.

In the case of recruiting, internet technology has allowed small businesses with few resources to cast a wide net for talent, evaluate candidates for key skills and capabilities, and fill positions quickly.

Complying with the new law will be expensive and time-consuming. In effect, the compliance costs will probably negate any benefits a small business might glean from using the technology in the first place. The penalties for violations are not yet clear, but the spectre of them may prompt a move back to the old-fashioned way of doing things – face-to-face, person-to-person – and that is sure to affect competitiveness and, in some cases, survival.

The only businesses sure to profit from the new law are those that sell audit software designed to identify bias in the AI process. Already there is a flurry of activity among these companies beating the drums, selling the downside of non-compliance.

Tucked into New York’s new law is a curious condition: once informed that a company is using AI for recruiting and employment management, candidates and employees may choose an alternative selection process. That includes talking to an actual human. Fair, perhaps a bit Neanderthal, and probably a sure way not to get a job or a promotion. It’s also a workaround that seems to defeat the purpose of the whole exercise – the use of AI and the law meant to constrain it.

Is there an upside? Well, if you consider it a good thing that the new law will see the creation of a force of bureaucrats in charge of enforcement, then, yes.
