
Long-time tech journalist Kara Swisher compared artificial intelligence to the smartphone on one of her podcasts, saying she believes it will transform our lives just as profoundly. Vancouver-based tech writer Alexandra Samuel, whose most recent book was on the remote workplace, labels AI “the new hybrid work.”

Over the past 18 months, our attention has focused on finding the right balance between working from the office and working from home. “This transformation in the where of our working lives is about to collide with a transition in who is doing that work. This collision is the new hybrid work, in which organizations are made up of a mix of on-site and remote workers, and teams are made up of a mix of human and artificial colleagues,” she writes on JSTOR Daily.

AI is the latest business buzzword, owing to ChatGPT’s dramatic appearance. It is often accompanied by the word “guardrails,” which to my mind conjures up flimsy metal barriers on the side of a highway that won’t stop a speeding car from hurtling over a cliff or into a rockface. Geoffrey Moore, a management consultant and venture capitalist who wrote the bestseller Crossing the Chasm, argues the term AI is nonsensical, a triumph of rhetoric over reality that presents the illusion of intelligence.

“ChatGPT has no ideas of any kind – no knowledge or expertise – because it has no semantic information,” Mr. Moore, who began his professional life as an English professor, writes in the Human-Centered Change and Innovation blog.

“It is all math. Math has been used to strip words of their meaning, and that meaning is not restored until a reader or user engages with the output to do so, using their own brain, not ChatGPT’s. ChatGPT is operating entirely on form and not a whit on content.”

ChatGPT is a vehicle for transmitting the wisdom of crowds, but it has no wisdom itself. And because it has no knowledge of content, it can’t be self-governing with respect to that content. Whatever guardrails a manager has in mind would best be put in place before the data gets into ChatGPT, by restricting the data sets used to trustworthy sources. “That would ensure the output will be trustworthy or at least not malicious,” he says. Martha Stewart told Ms. Swisher she is working on transforming her storehouse of information into MarthaGPT, and other organizations are exploring similar options.

Mr. Moore is not all negative. “It is clear that for any form of user support services, ChatGPT is nothing less than a godsend, especially where people need help learning how to do something. It is the most patient of teachers, and it is incredibly well-informed. As such, it can revolutionize technical support, patient care, claims processing, social services, language learning, and a host of other disciplines where users are engaging with a technical corpus of information or a system of regulated procedures. In all such domains, enterprises should pursue its deployment as fast as possible,” he advises.

Three recent studies back him up. Support agents who used AI could handle 13.8 per cent more customer inquiries per hour. Business professionals who used AI could write 59 per cent more business documents per hour. And programmers who used AI could complete 126 per cent more coding projects per week.

But Mr. Moore also warns that “wherever ambiguity is paramount, wherever judgment is required or wherever moral values are at stake, one must not expect ChatGPT to be the final arbiter. That is simply not what it is designed to do. It can be an input, but it cannot be trusted to be the final output.”

That leaves it to us. And we are not prepared. A Salesforce survey of knowledge workers found that many believe AI will transform their roles, but 62 per cent say they don’t have the skills to use the technology effectively and safely. Communication and training by management will be essential.

Kathy Baxter, principal architect of Ethical AI Practice at Salesforce, and Yoav Schlesinger, who works with her, highlight in Harvard Business Review five areas to consider when venturing into the AI thicket: accuracy, safety, honesty, empowerment and sustainability.

Organizations need to be able to train AI models on their own data to deliver accurate results. Every effort must be made to mitigate bias, toxicity and harmful outputs. And there should be consent for the data used, which can best be ensured by relying on open-source and user-provided data.

While there are some cases where it is best to fully automate processes, they believe AI should more often play a supporting role. “Today, generative AI is a great assistant. In industries where building trust is a top priority, such as in finance or health care, it’s important that humans be involved in decision-making – with the help of data-driven insights that an AI model may provide – to build trust and maintain transparency,” they write.

Finally, something rarely mentioned: Think of the environment. “Some of these large language models have hundreds of billions of parameters and use a lot of energy and water to train them,” the Salesforce team notes. When considering AI models, larger doesn’t always mean better. Smaller models can increase accuracy and minimize environmental harm.

The new hybrid work that Ms. Samuel defines – remote and in office, plus human and AI – will require even more skills from managers. Jared Spataro, Microsoft’s corporate vice-president for modern work and business applications, says managers “need to know how to manage the time and energy of the people in their organization across time and space. They have to be able to recognize where augmenting human capacity with machine-based capacity is going to help them get the job done faster, better, higher quality.” That may be where the intelligence of AI truly comes into play.

Cannonballs

  • You don’t compete with other companies, argues Basecamp chief executive officer Jason Fried. You compete with costs. Make your economics work and you’ll stay in business.
  • Leadership coach John Mattone notes “much of your natural thinking as a leader, when left unchecked, is biased, distorted, partial, uninformed or downright prejudiced. Your effectiveness as a leader, however, depends precisely on the quality of your thoughts.” Think critically.
  • McGill University management professor Henry Mintzberg says we need to weed out and prosecute the individuals in organizations who break the law, rather than focus on the company. Holding executives personally responsible for crimes would send a far stronger message to their successors than holding the corporation responsible.

Harvey Schachter is a Kingston-based writer specializing in management issues. He and Sheelagh Whittaker, former CEO of both EDS Canada and Cancom, are the authors of When Harvey Didn’t Meet Sheelagh: Emails on Leadership.
