Why Burned-Out Teams Can't Drive AI Adoption in 2026
The Tool Won't Save You. Your Team Might.
I've had a version of the same conversation dozens of times over the past two years. A leader, sometimes a managing partner, sometimes a VP, sometimes a CEO, leans across the table and says: We implemented the AI platform. We did the training. Six months later, nothing really changed.
They're not wrong, and they're not alone.
What I've noticed, though, is that these leaders are almost always asking the wrong question afterward. They want to know which tool they should have chosen instead. Whether a different platform would have produced different results. Whether the rollout was structured correctly. But the tool is rarely the problem.
What's Actually Getting in the Way
When I work with leadership teams, I can usually tell within the first hour whether an AI initiative is going to take hold. It has almost nothing to do with the technology they've selected. It has everything to do with the emotional state of the team being asked to use it.
Burned-out teams cannot innovate. That sounds simple, but the implications are significant. When people are running on empty—managing impossible workloads, uncertain about their futures, feeling like they're already behind—their nervous systems are in protection mode. And in protection mode, the brain doesn't experiment. It doesn't ask curious questions. It doesn't raise its hand to try something new and risk looking like it doesn't know what it's doing.
Fear does the same thing. Right now, a substantial portion of the workforce is genuinely afraid that AI is coming for their jobs. That fear is understandable, and in many cases misdirected, but it's real, and it's sitting in the room every time a leader rolls out a new AI initiative and asks their team to engage with it enthusiastically.
Think about what that actually looks like from the team's perspective. You're already overwhelmed. You're already worried about your relevance. And now leadership is asking you to learn a new system, change how you work, and demonstrate competence in something you've never done before—while still hitting all your existing targets. The path of least resistance is to appear compliant and quietly continue doing things the way you always have.
The data reflects what I see in practice. Gallup's 2026 State of the Global Workplace report found that manager engagement has fallen to just 22% globally and that employees whose managers actively champion AI adoption are 8.7 times more likely to say it has genuinely transformed how their work gets done. The manager is the multiplier, but you can't multiply from a place of depletion.
The Real Conversation About AI and Your Profession
I want to speak directly to something that doesn't get said clearly enough, especially in professions like accounting and financial services where the disruption conversation has been particularly loud.
Yes, AI has changed and will continue to change many industries. Significantly. Probably faster than most people are prepared for. The work that used to fill entire careers—the manual reconciliations, the data entry, the repetitive compliance tasks—is already being automated, and that will only accelerate.
But here's my argument: that work was never the best use of what skilled professionals have to offer.
The work that AI cannot do is the work that actually matters most to clients. Building trust. Asking the questions a client didn't know they needed to be asked. Sitting with someone in a genuinely difficult moment and helping them think clearly. Seeing around corners. Connecting dots across a business that a model can't see because the model doesn't know the person across the table.
That work, the deeply human, relational, strategic work, is not going away. If anything, it's becoming the entire job. And for many people, that's not a loss. That's the work they went into their profession to do in the first place, before the volume of transactional work crowded it out.
The leaders I most admire right now are the ones who are holding both of these truths at the same time. Yes, this is a moment of real disruption. And yes, it is also a genuine opportunity to reshape what their teams spend their time on, to let people work at the level their skills and curiosity have always been pointing toward.
What Leaders Actually Need to Do
If you're a leader trying to get real value from AI investment, let me ask you this: Is your team in a condition to absorb change right now?
Not do they have access to the tool. Not have they completed the training module. But genuinely, do your people feel safe enough to experiment? Safe enough to be beginners? Safe enough to raise their hand and say, I don't know how to do this yet, and I want to figure it out?
Psychological safety is a performance condition. Teams that have it innovate. Teams that don't, regardless of what tools you give them, default to the familiar. Creating that safety starts with how leaders show up.
When a managing partner admits openly that they're still figuring out how to use AI effectively, that changes the permission structure for everyone below them. When a leader celebrates a team member who tried something new and learned from what didn't work, that signals that the culture can hold experimentation. When someone on your team has a particular interest or strength or even obsession that could be applied in a new way—and you actually make space for that—you might be surprised what gets built.
I've seen teams unlock capabilities leadership didn't know existed, simply because someone finally created the conditions for people to bring their whole thinking to the table.
The Opportunity in Front of You
The organizations that will build genuine competitive advantage from AI are not necessarily the ones with the biggest technology budgets or the most sophisticated platforms. They're the ones with engaged teams, present managers, and cultures where people feel secure enough to move forward into something new.
That's always been true of any major shift in how work gets done. The human has always been the variable. What's different now is that the stakes are higher and the window is shorter.
Your team has more to offer than the tasks they've been spending their time on. AI gives you a chance to find out what that is. But only if you create the conditions for them to show you.
Frequently Asked Questions: AI Adoption in the Workplace
Why isn't AI improving productivity in most companies?
The technology itself is rarely the problem. A 2025 MIT study found that despite roughly $40 billion in enterprise investment in AI, 95% of organizations have seen zero measurable impact on profits. The reason, in most cases, comes down to the human conditions inside the organization. Burned-out teams cannot absorb change. Disengaged managers cannot champion it. When people are operating in survival mode — overwhelmed, uncertain about their futures, or afraid of looking incompetent — they default to what's familiar regardless of what tools are available to them. AI adoption fails not because of what's on the screen, but because of what's happening in the room.
What is the biggest barrier to AI adoption in the workplace?
Manager engagement is the single most underestimated barrier to successful AI adoption. Gallup's 2026 State of the Global Workplace report found that employees whose managers actively support AI use are 8.7 times more likely to say it has genuinely transformed how their work gets done. Yet globally, manager engagement sits at just 22%. The people responsible for modeling new behaviors, championing new tools, and creating psychological safety for their teams are themselves depleted. No implementation plan compensates for that gap.
How does employee burnout affect AI implementation?
Burnout puts teams into protection mode, which is the opposite of the mindset required for successful adoption of anything new. When people are exhausted and under sustained stress, they stop taking risks, avoid asking questions that might expose gaps in their knowledge, and stick to familiar processes even when better options are available. The result is surface-level compliance — teams appear to be using new tools while quietly continuing to work the old way. This is not resistance. It is self-preservation. And it is one of the primary reasons AI initiatives stall without an obvious explanation.
What role do managers play in AI adoption?
Managers are the multiplier. They determine whether AI becomes a tool their team actually integrates or a platform that gets underused and eventually abandoned. When managers model curiosity — using tools openly, admitting they are still learning, celebrating experimentation rather than penalizing failure — they shift the permission structure for everyone on their team. When they are disengaged, overwhelmed, or privately skeptical, that signal travels just as clearly. Investing in manager engagement and wellbeing is not separate from an AI strategy. For most organizations, it is the AI strategy.
What does psychological safety have to do with AI in the workplace?
Psychological safety is the performance condition that makes adoption possible. Teams that feel safe enough to experiment, ask questions, and be beginners will engage with new technology in the way it was designed to be used. Teams operating in fear of job loss, of failure, of looking incompetent will not. Research consistently shows that psychologically safe teams generate significantly more innovative ideas and take the kind of experimental risks that make new tools actually useful. Before asking which AI platform to implement, the more important question is whether the team has the conditions to absorb something new at all.