‘High Risk, High Reward’: How Leadership Should Embrace AI in the Workforce
Like it or not, artificial intelligence is pushing leadership in a new direction.
AI’s influence on humanity and the way we work has been hotly contested for months. To some, the AI era will transform societal and working conditions for the better, leading to happier, more fulfilled employees, more satisfied customer bases and, ultimately, higher profitability for organizations. To others, we have caught the proverbial tiger by the tail.
Any way you slice it, AI is here to stay. And for senior leaders, that means evolving, rather than replacing, existing skills — both their own and those of their workforces.
Most (94%) business leaders agree that AI is critical for success, per a 2022 Deloitte report. And it will be AI-informed leadership that sets the most effective organizations apart from the competition in the future, experts say. Judging when not to use it will be just as critical.
The frantic hype around AI taking over our jobs is slowly giving way to a more considered conversation about how AI can improve the day-to-day working experience. To start, senior leaders will need to cultivate interpersonal skills even more. And if AI is to handle time-consuming technical aspects of, for example, a CEO’s role, the human element of interpreting data and asking AI the right questions to make sound judgments will be crucial.
“To be an effective leader in an AI environment they need to cultivate the human capabilities in particular,” said Sue Cantrell, vp of products, workforce strategies at Deloitte Consulting. “Their judgment will be critical. AI can inform their decisions, with data, with suggestions, with recommendations, but leaders still need to be able to frame the issues, make the smart calls and ensure they’re well executed.”
And to do that, senior leaders need to experiment. “They need to disrupt their own decision-making style to fully exploit AI capabilities, like temper their convictions with data, test their beliefs with experiments, and most importantly figure out how to direct AI to attack the right problems,” added Cantrell.
As part of this period of experimentation, leaders will need to road test AI. Some already are. One third of executives say that over the next 12 months they plan to run four “trust-building” processes: improve the governance of AI systems and processes; confirm AI-driven decisions are interpretable and easily explainable; monitor and report on AI model performance; and protect AI systems from cyber threats and manipulation, per a 2023 Trust Survey from management consultancy PwC.
“CEOs will need to develop a deeper understanding of the AI tools they are using than any other business technology they have engaged with before, because of the wide ranging ways AI can affect their business,” said Anthony Abbatiello, workforce transformation practice lead for management consultancy PwC, which recently pledged $1 billion toward AI investment in its workforce. “AI is not a technology that can be given to a CTO or a similar position, it is a tool that must be understood by the CEO and the entire management team.”
But leaders also need to pay close attention not just to the tech itself, but also to the effect it’s having on their workforces. With all the recent attention on generative AI, accelerated by the mainstream adoption of ChatGPT, speculation about it replacing jobs has heightened. CEOs need to anticipate and mitigate those fears by being transparent about plans to incorporate it, experts say.
“It is critical to develop personal skills to help ease these fears amongst workers,” said Abbatiello. “Knowing that AI will change many roles in the workforce, CEOs should encourage people to embrace experimentation with the technology and communicate the upskilling opportunities for them to work in tandem with the AI, rather than be replaced by it,” he added.
Bolstering employee confidence that CEOs will continue putting humans at the center, and that employees are highly valued in this new AI era, is top of mind for Jen Berry, U.K. CEO of digital advertising agency Digitas, part of Publicis Groupe, which has its own AI research and development hub. “I call it the centaur effect,” she said. “I’m a big believer in human interaction and connection. I don’t think that will go away. Automation and speed will help us do different things, focusing on more strategic, creative endeavors. But it will still be on us to understand our people on a human level,” she added.
Given the rapid pace of AI development, CEOs “need to be even more comfortable in uncertain waters,” added Berry. “And we need to continue relying on our compass of: What does this mean for our business, our clients and how do we then prioritize, flex and adapt? This is a whole new level of speed in which the marketplace is moving. But it’s in our DNA to flex and adapt, so that’s the part of leadership that will be really important.”
Freeing up ‘critical thinking’
Concern around employee productivity has been top of mind for some senior leaders over the last year. Meanwhile, the ongoing mental health narrative and the stories of employee burnout that have gained traction since the start of the pandemic remain equally worrisome. And while there is good cause to say that the threatening hype around AI isn’t helping with that, experts believe that AI can help mitigate both, and even improve the meaning and purpose of work.
The theory goes that if people save time on the more administrative, tedious aspects of their work, they’ll have more brain capacity for creative and strategic thinking.
And this will evolve the role of leaders in an exciting way, stressed Bryan Hancock, global leader of McKinsey’s talent practice. “There is exciting potential for AI to be a copilot, so leaders can spend more time managing and leading,” said Hancock. That should also open the door for more “tailored, and informed discussions, interventions, intelligence as a leader that lets you really zero in on where you can uniquely make the biggest difference,” he said.
For example, say you’re a regional manager for a trucking company and have a driver shortage. This type of problem might be misattributed to a tight labor market. But if AI is analyzing the right data, it could highlight that the issue actually lies in a totally different area: driver retention. And that can turn the spotlight on the area that needs to be addressed faster. It can help determine which workers are most likely to stay, and under what conditions, and which are more likely to turn over — a bit like the customer relationship management software airlines use to create a more personalized travel experience. “Then we can be even more tailored in our intervention so that we notice that if we have someone new, working a late shift, working with a new supervisor, who has perhaps not had great people scores, then we know we need to swarm that driver and make sure we’re communicating with them and supporting them in a different way,” added Hancock.
AI will also have a prominent role in the future of product innovation, according to Matt Summers, global vp of leadership at the NeuroLeadership Institute. In fact, early adopters that have invested in AI over the past three years have reported a 35% improvement in innovation and a 33% improvement in sustainability, per a 2023 Dataiku report. “That’s going to free up senior leaders to be able to do some of the critical thinking around the options that AI provides in that innovation space,” he said. But what it won’t do is eliminate senior leadership decision making, he stressed.
If senior leaders are to be the key shapers of how AI is used for everything from improving business processes to enhancing customer and workforce experiences, then they need to invest in reskilling their workforces, not replacing them, added Summers. “Don’t go out and hire for new skill sets for gaps you have in the organization; reskill your existing employees,” he said. “Use your existing human capital to become more human-centric as a culture where you’re building skills across functionalities. Educate internally [on AI] and you can save hundreds of thousands of dollars from hiring all these needed skill sets.”
Cultivating AI ethical judgment
AI has many guises. There is the more traditional kind, which detects patterns in data and provides predictive analysis; it has been used by some businesses for the last few years but isn’t yet mainstream. And then there is the generative kind, which actually creates material, like chat responses and synthesized data, as ChatGPT does. And while it’s the latter that has sparked such interest from the general public over the last few months, its fusion with the more traditional kind opens up the most exciting opportunities.
Senior leaders need to pay attention to both types, but shouldn’t conflate them, stressed Summers. “Now AI has evolved into all parts of human interaction and human decision making at that senior executive level,” he added.
Being AI-informed — not only about the different types, but also about how to implement them ethically — will be a responsibility that falls to senior leaders. And blindly rushing into the world of generative AI would be a mistake.
“One of the things for me, and I’m sure for a lot of other senior leaders, is trusting the machine,” said Berry. “That’s being measured in terms of how we test and what we test, and being aware of the notion of ethics and privacy and ensuring we’re using tools with the right lens.”
For now, regulation around AI remains hazy. While laws are being drawn up around how employers should ethically and responsibly implement AI in areas such as hiring, to ensure job posts aren’t discriminatory in any way (New York City’s Local Law 144, for example), there are still relatively few guardrails.
“It’s high stakes: high risk and high reward,” for leaders, stressed Cantrell. “It’s really resting on senior leadership and the board to make sure they’re using it responsibly. And if they don’t understand enough about how it works and its limitations, how can they use it responsibly and apply AI to the right problems? It can potentially scale decisions that are biased, for example,” she added.
Organizations are tackling that by introducing AI ethicists to advise on how to develop robust frameworks of responsibility, and to ensure the governance of AI is shared across the C-suite, according to Cantrell. These frameworks pose questions around whether a company is measuring the right things and continually auditing for bias, to ensure fairness and impartiality.
Guarding worker privacy and being transparent about when a company is using AI to collect data on workers will be critical, she added. “We have to be very, very careful about ensuring that it doesn’t come across as AI surveillance,” she said. That will involve asking workers to opt in to their data being collected.
Organizations should ensure that privacy is embedded in the core of AI projects from the start, Abbatiello advised. Plus, rigorous auditing and monitoring measures should be put in place to maintain the accuracy and fairness of AI outcomes. “If CEOs see AI as generating bias, they must consciously reject these results and evaluate the AI tools they are using to confirm they are the right fit,” he said.
Written by: Jessica Davies, managing editor, for Worklife.