
A comment, a statistic, an experiment, a series of illuminating metaphors and some ideas for shaping a game plan caught my attention recently as I have been pondering the impact of AI on managers and the organizations they lead.

Let’s begin with Google software engineer Alexander Irpan’s crisp observation on his blog: “Today is the worst AI will ever be.” The technology will only keep improving, and more capabilities will keep being offered.

We don’t know where AI is taking us, or where we are collectively taking it. As technology journalist Kara Swisher has noted on the tour for her recent book, when the iPhone was launched we didn’t know it would lead to Uber, but we did know it would be transformative.

How transformative is hinted at in this statistic from a survey of American managers: 64 per cent said AI’s output and productivity are equal to those of experienced, expert managers, and potentially better than anything human managers deliver. Managers are at the core of organizations, their co-ordinators and implementation leaders; if that era is about to end, it is a profound if not earth-shaking moment.

A more mundane task organizations have already been turning to AI for is summarizing meetings and reports. University of California informatics professor Gloria Mark had the 40 students in her Computer Supported Cooperative Work class test AI summaries of articles. In general, AI was good at outlining the structure of an article and helped the students skim the readings. But it also was, in the students’ words, “generic,” “vague,” “superficial,” “bland,” “uninformative” and “lacked essential ideas” of the articles. Its approach was formulaic, with facts presented in the order they appeared in the article, whereas a human would usually prioritize the main points, followed by secondary ones. “The students described that the lack of depth hindered them from grasping a deeper understanding of the topics,” she notes on her Substack blog.

And, of course, almost every student encountered what are generously being called “hallucinations” but that we might more accurately call errors or fabricated information. Given the urge to move more quickly, AI summaries will be alluring for all of us. But the current state – the worst AI will ever be, of course – makes them a hindrance if not properly used.

Consultants Allison F. Avery and Nicolas Maitret shared four metaphors that we are using as we grapple with AI: tool, master, partner and assistant. “Choosing the right metaphor matters: People’s attitudes and behaviours are informed by the metaphors we use to communicate. Metaphors don’t just serve to make sense of our reality – they help shape it,” the duo write on Charterworks.

They point out that the tool metaphor is the only one of the four that isn’t anthropomorphic. Viewing AI as a device used to carry out a particular function reinforces the idea that AI models are created by humans and can therefore be fully shaped by them.

There are fears AI will become our master, gaining authority over the human race. The consultants feel this metaphor calls for either resistance or submission. Resistance could lead to bans of the technology while submission is disempowering and paralyzing. So not a helpful metaphor.

AI is often referred to as a partner, but that suggests a relationship of equals, which isn’t really accurate. Humans set the models’ initial goals and design them, which creates a hierarchy in the relationship.

So the most helpful metaphor is AI as an assistant. And that means, the consultants advise, you will need to understand its strengths and weaknesses, set goals and guardrails, and oversee it continually. Don’t glide over those three prescriptions; each is a vital touchstone you will need for operating in the days ahead.

So are four provocative questions that Wharton School Professor Ethan Mollick offered on his blog to accelerate your organization’s future:

  • What useful thing that you do is no longer valuable? “AI doesn’t do everything well, but it does some things very well. For many organizations, AI is fully capable of automating a task that used to be an important part of your organizational identity or strategy. AI comes up with more creative ideas than most people, so your company’s special brainstorming techniques may no longer be a big benefit,” he writes.
  • What impossible thing can you do now? He points out that you potentially can have an infinite number of interns for every employee. “How does giving everyone a data analyst, marketer and adviser change what is possible?” he asks.
  • What can you move to a wider market or democratize? Companies have often been advised to put their effort into serving their most profitable customers, but now services and approaches that were once expensive to customize have become cheap.
  • What can you move upmarket or personalize? Organizational capabilities have increased. “If you were once a small marketing firm, you can use AI to punch above your weight and offer services to elite clients that were once only available from much larger firms,” he says. “Figure out the most exciting thing you can do, and see if you can make it happen.”

Early movers have always been celebrated, but there is also contrary evidence that later entrants to a market or an idea sometimes fare better, notably Apple with the smartphone. You need to figure out when to get on the AI bandwagon – or, perhaps more accurately, when to create your own bandwagon. But make no mistake: AI is the worst it will ever be today, and there are lots of avenues for your organization to exploit.

Cannonballs

  • When a person gets into the habit of coming directly to you with a problem, asking “what do you think I should do?”, your instinct will be to provide your recommendation or even make the decision for them. It seems helpful and can stroke your ego. But consultant Stephen Lynch calls it “chasing the bone” – someone throws you a bone, repeatedly, and like a good dog you chase it, making the person dependent on you.
  • A year after the Bud Light boycott began, three business school professors note the brand is still suffering, not bouncing back as brands have with other consumer boycotts. They warn that if you edge toward – or plunge into – a political issue, you must know the leanings of your audience. The worst situation is when your audience is evenly divided between the left and right, rather than mostly leaning in the direction you are supporting.
  • Add these two questions to your interviewing arsenal, from consultant Paul Bramson, if you wish to delve into a job candidate’s integrity: “How do you define ethical integrity in the workplace?” and then “Based on that definition, describe a situation where you faced a significant ethical dilemma and how you handled it.”

Harvey Schachter is a Kingston-based writer specializing in management issues. He, along with Sheelagh Whittaker, former CEO of both EDS Canada and Cancom, are the authors of When Harvey Didn’t Meet Sheelagh: Emails on Leadership.
